Sample records for computers running Linux

  1. TICK: Transparent Incremental Checkpointing at Kernel Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package for Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.
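
    TICK itself is a kernel module and predates today's mainstream checkpoint/restore tooling, but the workflow it describes (freeze a running process to a file, thaw it elsewhere) can be illustrated with CRIU, a modern userspace checkpoint/restore tool for Linux that is unrelated to TICK; the PID and image directory below are hypothetical. A minimal sketch:

        import subprocess

        def checkpoint(pid: int, image_dir: str) -> None:
            # Freeze the process tree rooted at `pid` and dump its state
            # (memory, registers, open files) into image files.
            subprocess.run(
                ["criu", "dump", "-t", str(pid), "-D", image_dir, "--shell-job"],
                check=True,
            )

        def restore(image_dir: str) -> None:
            # Recreate the process from the dumped images, potentially on
            # another Linux machine that can read `image_dir`.
            subprocess.run(
                ["criu", "restore", "-D", image_dir, "--shell-job"],
                check=True,
            )

        # Example (hypothetical PID):
        # checkpoint(12345, "/tmp/ckpt")
        # restore("/tmp/ckpt")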

  2. Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters

    DTIC Science & Technology

    2015-01-01

    onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot...Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki. Small UAV for immediate hazard map generation. In AIAA Infotech@Aerospace Conf, May 2007.

  3. Impact of the Shodan Computer Search Engine on Internet-facing Industrial Control System Devices

    DTIC Science & Technology

    2014-03-27

    bridge implementation. The transparent bridge is designed using a Raspberry Pi configured with Linux iptables and bridge-utils to bridge the onboard...Ethernet card and a second USB Ethernet adapter. A Raspberry Pi is a credit-card-sized single-board computer running a version of Debian Linux. There
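
    The record only names the building blocks (bridge-utils and iptables); the following is a minimal sketch of the kind of transparent-bridge setup it describes, with the interface names (eth0 for the onboard port, eth1 for the USB adapter) assumed for illustration:

        import subprocess

        def sh(cmd: str) -> None:
            # Run one configuration command, raising on failure.
            subprocess.run(cmd.split(), check=True)

        # Create a bridge joining the onboard NIC and the USB Ethernet adapter
        # (interface names are assumptions; adjust to the actual hardware).
        sh("brctl addbr br0")
        sh("brctl addif br0 eth0")
        sh("brctl addif br0 eth1")
        sh("ip link set br0 up")

        # With iptables, bridged traffic can be passed through (or filtered and
        # logged) while the bridge itself remains invisible at the IP layer.
        sh("iptables -A FORWARD -j ACCEPT")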

  4. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov Websites

    WinSCP for Windows File Transfers. WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.
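
    WinSCP is an interactive client, but the same transfer can be scripted over the same protocol; the sketch below uses Python's paramiko library (an SFTP/SSH implementation unrelated to WinSCP), with the host name and credentials as placeholder assumptions:

        import paramiko

        # Connect to the remote Linux host over SSH (placeholder credentials).
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("hpc.example.org", username="user", password="secret")

        # Copy a local Windows file to the remote machine over SFTP,
        # then fetch a result file back.
        sftp = client.open_sftp()
        sftp.put(r"C:\data\input.txt", "/home/user/input.txt")
        sftp.get("/home/user/results.txt", r"C:\data\results.txt")
        sftp.close()
        client.close()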

  5. Integrating a Trusted Computing Base Extension Server and Secure Session Server into the LINUX Operating System

    DTIC Science & Technology

    2001-09-01

    Readily Available Linux has been copyrighted under the terms of the GNU General Public License (GPL). This is a license written by the Free...GNOME and KDE. d. Portability Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] The GNU Project is coordinated by the

  6. Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual

    DTIC Science & Technology

    2005-02-01

    on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The...SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA...middleware, Xerces for the XML parser, and Red Hat Linux for the Operating System. The software is referred to as Open Radio Communication

  7. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  8. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  9. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.

  10. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in, and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and has already proven to be very effective at optimizing the running systems and detecting misbehaving processes or nodes.
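
    The paper's WMI client is integrated with the experiments' SCADA software; as a standalone illustration of the kind of query such a monitor issues, the sketch below uses the third-party Python wmi package from a Windows host against a hypothetical Windows node (node name and credentials are placeholders, and this is not the authors' implementation):

        import wmi

        # Connect to a remote Windows node via WMI (placeholder credentials).
        node = wmi.WMI(computer="windows-node-01", user="monitor", password="secret")

        # CPU load per processor.
        for cpu in node.Win32_Processor():
            print(cpu.DeviceID, cpu.LoadPercentage)

        # Running processes, e.g. to spot misbehaving ones by memory use.
        for proc in node.Win32_Process():
            print(proc.ProcessId, proc.Name, proc.WorkingSetSize)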

  11. VizieR Online Data Catalog: RefleX : X-ray-tracing code (Paltani+, 2017)

    NASA Astrophysics Data System (ADS)

    Paltani, S.; Ricci, C.

    2017-11-01

    We provide here the RefleX executable, for both Linux and MacOSX, together with the User Manual and an example script file and output file. Running (for instance) reflex_linux will produce the file reflex.out. Note that the results may differ slightly depending on the OS, because of slight differences in some implementations of numerical computations. The differences are scientifically meaningless. (5 data files).

  12. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  13. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  14. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post-processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large, sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding, and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  15. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  16. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments...against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while

  17. CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.

    PubMed

    Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav

    2017-11-15

    CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  18. Running GUI Applications on Peregrine from OSX | High-Performance Computing

    Science.gov Websites

    Learn how to use Virtual Network Computing (VNC) to access a Linux graphical desktop environment on Peregrine. The procedure forwards a local port (on, e.g., your laptop) and starts a VNC server process that manages a virtual desktop on Peregrine; the VNC password is persistent, so remember it: you will use this password whenever accessing your virtual desktop.
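
    The usual pattern behind such instructions is to tunnel the VNC display port over SSH so the desktop session is encrypted; a sketch of that step follows (the host name and port numbers are assumptions for illustration, not NREL's actual settings):

        import subprocess

        # Forward local port 5901 to the VNC server's display port on the
        # login node, then point a VNC viewer at localhost:5901.
        subprocess.run([
            "ssh", "-L", "5901:localhost:5901",
            "username@peregrine.example.gov",   # placeholder host
        ])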

  19. Power Monitoring Using the Raspberry Pi

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    2014-01-01

    The Raspberry Pi is a credit-card-size, low-powered compute board with an Ethernet connection, HDMI video output, audio, a full Linux operating system run from an SD card, and more, all for $45. With cables, SD card, etc., the cost is about $70. Originally designed to help teach computer science principles to low-income children and students, the Pi has…

  20. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use grid computing technology to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4-15.9 times faster, and Unphased jobs 1.1-18.6 times faster, compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
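
    The scheduler-based approach can be sketched simply: each window haplotype analysis becomes an independent job handed to the Grid Engine queue, so non-parallel programs like FBAT and Unphased run concurrently across cluster nodes. In the sketch below, the wrapper script name and window width are placeholders, since the abstract does not spell out the actual invocations:

        import subprocess

        # Submit one independent Grid Engine job per haplotype window.
        # `-b y` runs a command directly; `-cwd` keeps output in the working dir.
        N_LOCI = 26
        WINDOW = 5  # assumed window width for illustration

        for start in range(1, N_LOCI - WINDOW + 2):
            loci = ",".join(str(i) for i in range(start, start + WINDOW))
            subprocess.run(
                ["qsub", "-b", "y", "-cwd", "-N", f"hap_{start}",
                 "run_fbat_window.sh", loci],  # placeholder wrapper script
                check=True,
            )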

  1. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use grid computing technology to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4–15.9 times faster, and Unphased jobs 1.1–18.6 times faster, compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  2. XVD Image Display Program

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul

    2009-01-01

    The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.

  3. Arlequin suite ver 3.5: a new series of programs to perform population genetics analyses under Linux and Windows.

    PubMed

    Excoffier, Laurent; Lischer, Heidi E L

    2010-05-01

    We present here a new version of the Arlequin program available under three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.

  4. RTSPM: real-time Linux control software for scanning probe microscopy.

    PubMed

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.

  5. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. In addition to the downloadable BioNode images, we provide online tutorials, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.

  6. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  7. Mars Science Laboratory Workstation Test Set

    NASA Technical Reports Server (NTRS)

    Henriquez, David A.; Canham, Timothy K.; Chang, Johnny T.; Villaume, Nathaniel

    2009-01-01

    The Mars Science Laboratory-developed Workstation Test Set (WSTS) is a computer program that enables flight software development on virtual MSL avionics. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC.

  8. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  9. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguckin, Theodore

    2008-10-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Red Hat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational control room environment from the HP-UX to the Linux platform, as well as the many hurdles that had to be overcome throughout the transition period (including a discussion of

  10. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    ERIC Educational Resources Information Center

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  11. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online. bionetgen.help@gmail.com. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Introduction to LINUX OS for new LINUX users - Basic Information Before Using The Kurucz Codes Under LINUX-.

    NASA Astrophysics Data System (ADS)

    Çay, M. Taşkin

    Recently the ATLAS suite (Kurucz) was ported to LINUX OS (Sbordone et al.). Those users of the suite unfamiliar with LINUX need to know some basic information to use these versions. This paper is a quick overview of and introduction to LINUX OS. The reader is highly encouraged to own a book on LINUX OS for comprehensive use. Although the subjects and examples in this paper are for general use, they also help with the installation and running of the ATLAS suite.

  13. NSTX-U Control System Upgrades

    DOE PAGES

    Erickson, K. G.; Gates, D. A.; Gerhardt, S. P.; ...

    2014-06-01

    The National Spherical Torus Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially the elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Red Hat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward-port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real-time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

  14. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    docs.us.dell.com/support/edocs/systems/pe1650/en/it/index.htm> (20 August 2004) “HOWTO: Installing Web Services with Linux/Tomcat/Apache/Struts...configured as host machines with VMware and VNC running on a Linux RedHat 9 kernel. An Apache-Tomcat web server was configured as the external interface to...1650, dual-processor blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 kernel.

  15. Timing characterization and analysis of the Linux-based, closed loop control computer for the Subaru Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Dinkins, Matthew; Colley, Stephen

    2008-07-01

    Hardware and software specialized for real-time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed-loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to that of specialized real-time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.
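
    A simple way to quantify the jitter being discussed is to run a periodic loop and record how far each wake-up deviates from its deadline. The self-contained sketch below (illustrative only, not the study's instrumentation) does this with a 1 ms period:

        import time

        PERIOD_NS = 1_000_000  # 1 ms target period
        N = 10_000

        deadline = time.monotonic_ns() + PERIOD_NS
        lateness = []
        for _ in range(N):
            # Wait for the deadline, then record the overshoot in nanoseconds.
            while time.monotonic_ns() < deadline:
                pass  # busy-wait; a real-time loop would use clock_nanosleep
            lateness.append(time.monotonic_ns() - deadline)
            deadline += PERIOD_NS

        lateness.sort()
        print("median jitter:", lateness[N // 2], "ns")
        print("worst case:  ", lateness[-1], "ns")

    On a stock kernel the worst-case value is typically orders of magnitude larger than the median, which is exactly the gap that real-time extensions aim to close.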

  16. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme for a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators, such as RANLUX, RANECU or the Mersenne Twister, can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10, and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows PENELOPE to be run in parallel easily, without requiring specific libraries or significant alterations of the sequential code.
    Program summary 1
    Title of program: clonEasy
    Catalogue identifier: ADYD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: any computer with a Unix-style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1)
    Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux)
    Programming language used: Linux shell (bash) script, FORTRAN 77
    No. of bits in a word: 32
    No. of lines in distributed program, including test data, etc.: 1916
    No. of bytes in distributed program, including test data, etc.: 18 202
    Distribution format: tar.gz
    Nature of the physical problem: there are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time.
    Method of solution: the presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. Hence, clonEasy effectively permits the parallelization of the calculation.
    Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided.
    Typical running time: the execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth.
    Unusual features of the program: any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries.
    Program summary 2
    Title of program: seedsMLCG
    Catalogue identifier: ADYE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: any computer with a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP)
    Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows)
    Programming language used: FORTRAN 77
    No. of bits in a word: 32
    Memory required to execute with typical data: 500 kilobytes
    No. of lines in distributed program, including test data, etc.: 492
    No. of bytes in distributed program, including test data, etc.: 5582
    Distribution format: tar.gz
    Nature of the physical problem: statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCGs), or other generators that are based on them such as RANECU, can be adapted to produce these sequences.
    Method of solution: for a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S_{i+J} = (a^J * S_i) mod m, which gives the random value that will be generated after J iterations of the MLCG.
    Restrictions on the complexity of the program: the 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31 when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period.
    Typical running time: the execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
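
    The jump formula in the seedsMLCG summary is cheap to evaluate because a^J mod m can be computed by modular exponentiation. The sketch below applies it to one of RANECU's two component MLCGs (the modulus and multiplier quoted here are those of L'Ecuyer's 1988 generator, stated as an assumption) to produce seeds 10^12 draws apart, one per clone:

        # First of the two MLCGs combined in RANECU (constants assumed
        # from L'Ecuyer 1988).
        A, M = 40014, 2147483563

        def jump(seed: int, distance: int) -> int:
            # S_{i+J} = (A**J * S_i) mod M, via modular exponentiation.
            return (pow(A, distance, M) * seed) % M

        # Disjoint starting seeds for 8 clones, 10**12 draws apart.
        seed = 12345  # arbitrary initial seed
        for clone in range(8):
            print(f"clone {clone}: seed = {seed}")
            seed = jump(seed, 10**12)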

  17. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).

  18. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  19. Alcator C-Mod Digital Plasma Control System

    NASA Astrophysics Data System (ADS)

    Wolfe, S. M.

    2005-10-01

    A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted, single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod "Hybrid" MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μs per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
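
    Each cycle of the MIMO linear controller described amounts to reading the digitizer inputs, applying a gain matrix, and writing the outputs. A schematic of one such cycle follows; the array shapes match the two 64-input, 16-output digitizers described, while the gain matrix and the I/O stubs are placeholders:

        import numpy as np

        N_IN, N_OUT = 128, 32           # two 64-input, 16-output digitizers
        K = np.zeros((N_OUT, N_IN))     # placeholder MIMO gain matrix

        def read_inputs():
            return np.zeros(N_IN)       # stub for the digitizer read

        def write_outputs(u):
            pass                        # stub for the analog output write

        # One synchronous iteration: acquire, apply the linear law u = K x, actuate.
        x = read_inputs()
        u = K @ x
        write_outputs(u)
        # At a 10 kHz cycle rate, I/O plus this computation must fit in 100 us.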

  20. Volunteer Computing Experience with ATLAS@Home

    NASA Astrophysics Data System (ADS)

    Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.; ATLAS Collaboration

    2017-10-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one task to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
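
    Provisioning the virtual machine described is a single API call once an image ID is known. The sketch below uses boto3 (the AWS SDK for Python, which the record does not mention) with a hypothetical AMI ID and key pair standing in for the actual Cloud BioLinux image:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Launch one on-demand instance from a (hypothetical) Cloud BioLinux AMI.
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder image ID
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",             # assumed existing key pair
        )
        print(response["Instances"][0]["InstanceId"])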

  2. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  3. Real-time Experiment Interface for Biological Control Applications

    PubMed Central

    Lin, Risa J.; Bettencourt, Jonathan; White, John A.; Christini, David J.; Butera, Robert J.

    2013-01-01

    The Real-time Experiment Interface (RTXI) is a fast and versatile real-time biological experimentation system based on Real-Time Linux. RTXI is open source and free, can be used with an extensive range of experimentation hardware, and can be run on Linux or Windows computers (when using the Live CD). RTXI is currently used extensively for two experiment types: dynamic patch clamp and closed-loop stimulation pattern control in neural and cardiac single cell electrophysiology. RTXI includes standard plug-ins for implementing commonly used electrophysiology protocols with synchronized stimulation, event detection, and online analysis. These and other user-contributed plug-ins can be found on the website (http://www.rtxi.org). PMID:21096883

  4. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile image capable of booting different operating systems. The image is less than 20 megabytes in size; the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments: support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  5. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple networked PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts, and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% was reached compared to a model run on the fastest single host.
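
    The same master/worker pattern can be sketched in Python using the standard library's XML-RPC modules in place of Perl's SOAP::Lite (a deliberate substitution, not the original system); the host names, port, and the run_model stub are illustrative.

        # worker.py -- run one copy per remote host
        from xmlrpc.server import SimpleXMLRPCServer

        def run_model(cell):
            # stand-in for one grid cell's 1-D model run
            return {"cell": cell, "emission": 42.0}

        server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
        server.register_function(run_model)
        server.serve_forever()

        # master.py -- distribute grid cells over the worker pool with threads
        from concurrent.futures import ThreadPoolExecutor
        from xmlrpc.client import ServerProxy

        hosts = ["http://host1:8000", "http://host2:8000"]  # illustrative
        cells = range(100)

        def dispatch(args):
            host, cell = args
            return ServerProxy(host).run_model(cell)

        with ThreadPoolExecutor(max_workers=2 * len(hosts)) as pool:
            work = [(hosts[i % len(hosts)], c) for i, c in enumerate(cells)]
            results = list(pool.map(dispatch, work))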

  6. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo (MC) simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We studied the feasibility of utilizing cloud computing for the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform for running proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
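
    The job partitioning and cost arithmetic reported above reduce to a few lines; the per-node spot price here is inferred from the quoted cluster total and is only an estimate, so the linear scale-up does not exactly reproduce the quoted $1.41 (node pricing is not uniform).

        total_events = 10_000_000
        events_per_job = 500_000
        n_jobs = total_events // events_per_job   # 20 jobs of 500k events each
        print(f"{n_jobs} jobs")

        cluster_hourly = 0.63                     # quoted 40-node cluster cost
        per_node = cluster_hourly / 40            # roughly $0.016 per spot node-hour
        projected_100 = per_node * 100            # naive linear scale-up
        print(f"~${projected_100:.2f}/h for 100 nodes")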

  7. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. Without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
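
    The kernel mechanism involved can be driven from user space: Linux kernels built with memory-failure support expose sysfs files that take a physical address and offline the containing page. A minimal sketch, assuming root privileges and a hypothetical faulty address; this mimics the idea, not LANL's automation.

        # Soft-offline the page containing a given physical address. The kernel
        # migrates the page's contents and marks the page so it is not reused.
        PHYS_ADDR = 0x3f_0000_1000  # hypothetical address of a faulty page

        with open("/sys/devices/system/memory/soft_offline_page", "w") as f:
            f.write(hex(PHYS_ADDR))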

  8. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  9. Real Time Linux - The RTOS for Astronomy?

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    The BoF was attended by about 30 participants and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running > 30 kHz, 486-based oneshot tasks running at ~10 kHz, periodic timer tasks running in excess of 90 kHz with average zero jitter peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data, writing to a shared memory buffer, and using a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and allows a fully functioning workstation to co-exist with hard real time performance. The counterweights, the negatives, were also discussed: the lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real time programming issues. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.

  10. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest priority task. Linux runs in user-mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between both systems is possible through a loosely coupled mechanism based on message queues. Currently, only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be quickly developed or simply ported, reducing time-to-market and budget.

  11. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs.

  12. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analysis on commodity PCs running the Linux operating system.

  13. Computation and Validation of the Dynamic Response Index (DRI)

    DTIC Science & Technology

    2013-08-06

    • Implemented with the matplotlib plotting library. • Executed from the command line. • Allows several optional arguments. • Runs on Windows, Linux, UNIX, and Mac OS X. Acceleration vs. time: triangular pulse input data with given time duration and peak acceleration. EARTH code motivation: • Error assessment of ... public release. • ARC provided an electrothermal battery model example: • test vs. simulation data for terminal voltage • EARTH input parameters.

  14. Estimating aquifer transmissivity from specific capacity using MATLAB.

    PubMed

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
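
    The abstract does not give the program's equations, but one classical way to estimate transmissivity from specific capacity (before the partial-penetration and well-efficiency corrections the paper adds) is fixed-point iteration on the Cooper-Jacob relation; a minimal Python sketch with illustrative parameter values, not the paper's MATLAB code:

        import math

        def transmissivity(Q_over_s, t, rw, S, T0=100.0, tol=1e-6):
            """Estimate T (m^2/day) from specific capacity Q/s (m^2/day) by
            iterating the Cooper-Jacob approximation:
                Q/s = 4*pi*T / ln(2.25*T*t / (rw**2 * S))
            with t in days, well radius rw in m, and storativity S."""
            T = T0
            for _ in range(200):
                T_new = Q_over_s * math.log(2.25 * T * t / (rw**2 * S)) / (4 * math.pi)
                if abs(T_new - T) < tol:
                    break
                T = T_new
            return T_new

        # illustrative values only
        print(transmissivity(Q_over_s=120.0, t=1.0, rw=0.15, S=1e-4))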

  15. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application

    NASA Astrophysics Data System (ADS)

    Thubaasini, P.; Rusnida, R.; Rohani, S. M.

    This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. Besides that, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows- and Linux-based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux. The performance of Linux is measured based on three main criteria: frame rate, image quality, and mouse motion.

  16. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    DOE PAGES

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance, reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.
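
    In practice BLCR is driven through its command-line utilities (cr_run, cr_checkpoint, cr_restart). A hedged sketch of preempting and later resuming a job from Python; the PID, context file name, and exact flags should be checked against the BLCR documentation for the installed version.

        import subprocess

        pid = 4242  # hypothetical PID of a job started under cr_run

        # Checkpoint to a context file and terminate the process so the node
        # can be reallocated (--term sends SIGTERM after the checkpoint).
        subprocess.run(
            ["cr_checkpoint", "--term", "--file", "job.context", str(pid)],
            check=True,
        )

        # Later, e.g. during off-peak hours, resume from the saved context.
        subprocess.run(["cr_restart", "job.context"], check=True)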

  17. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of raw observational data covering the whole electromagnetic spectrum, and data processing so specialized that it exceeds the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is meant to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft, etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a beta version of AIRE-Linux is ready for download and testing.

  18. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

    The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams, each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non-real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node, all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections, for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals, providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware, which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.

  19. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  20. MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Timossi, Chris

    2006-10-19

    Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance, Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our controls system .NET applications from MS Windows to Linux.

  1. Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    PubMed Central

    Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena

    2011-01-01

    This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes. PMID:22346579

  2. Development of an Autonomous Navigation Technology Test Vehicle

    DTIC Science & Technology

    2004-08-01

    as an independent thread on processors using the Linux operating system. The computer hardware selected for the nodes that host the MRS threads...communications system design. Linux was chosen as the operating system for all of the single board computers used on the Mule. Linux was specifically...used for system analysis and development. The simple realization of multi-thread processing and inter-process communications in Linux made it a

  3. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
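
    The fork/copy-on-write mechanism and the Shared Event Queue strategy can both be illustrated with Python's multiprocessing on Linux. This is a toy analogue, not ATLAS code: the array stands in for large read-only conditions data, and the queue plays the role of the shared event queue.

        import multiprocessing as mp
        import numpy as np

        # Large read-only data allocated before forking; with the Linux "fork"
        # start method, workers share these physical pages copy-on-write.
        GEOMETRY = np.zeros(10_000_000)

        def worker(queue, wid):
            while True:
                evt = queue.get()
                if evt is None:          # sentinel: no more events
                    break
                _ = GEOMETRY[evt % GEOMETRY.size]  # read-only access stays shared
                print(f"worker {wid} processed event {evt}")

        if __name__ == "__main__":
            mp.set_start_method("fork")  # Linux-only; enables page sharing
            q = mp.Queue()               # the shared event queue
            procs = [mp.Process(target=worker, args=(q, i)) for i in range(4)]
            for p in procs:
                p.start()
            for evt in range(100):
                q.put(evt)
            for _ in procs:
                q.put(None)
            for p in procs:
                p.join()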

  4. Cross platform development using Delphi and Kylix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross platform component for EPICS Simple Channel Access (SCA) has been developed for the use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  5. ALMA test interferometer control system: past experiences and future developments

    NASA Astrophysics Data System (ADS)

    Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken

    2004-09-01

    The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.

  6. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC server can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  7. Injecting Artificial Memory Errors Into a Running Computer Program

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
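
    A toy analogue of probability-driven fault injection: flip each bit of a buffer independently with a given probability and log every flip. BITFLIPS itself works inside Valgrind on a live process; this sketch only mimics the injection policy, with all names and values invented.

        import random

        def inject_seus(buf: bytearray, p_bit: float, rng=random.Random(0)):
            """Flip each bit independently with probability p_bit; return a
            log of (byte index, bit index) for every injected upset."""
            flips = []
            for i in range(len(buf)):
                for b in range(8):
                    if rng.random() < p_bit:
                        buf[i] ^= 1 << b
                        flips.append((i, b))
            return flips

        data = bytearray(b"scientific payload data")
        for byte_i, bit_i in inject_seus(data, p_bit=0.01):
            print(f"SEU injected at byte {byte_i}, bit {bit_i}")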

  8. Mushu, a free- and open source BCI signal acquisition, written in Python.

    PubMed

    Venthur, Bastian; Blankertz, Benjamin

    2012-01-01

    The following paper describes Mushu, a signal acquisition software for retrieval and online streaming of Electroencephalography (EEG) data. It is written for, but not limited to, the needs of Brain Computer Interfacing (BCI). Its main goal is to provide a unified interface to EEG data regardless of the amplifiers used. It runs under all major operating systems, like Windows, Mac OS and Linux, is written in Python, and is free and open source software licensed under the terms of the GNU General Public License.

  9. Soft Real-Time PID Control on a VME Computer

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
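
    A minimal textbook PID loop at the quoted 8 kHz rate; the gains and the one-line plant model are invented for illustration and have nothing to do with the spectrometer hardware.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def update(self, setpoint, measured):
                err = setpoint - measured
                self.integral += err * self.dt
                derivative = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * derivative

        DT = 125e-6                                  # 8 kHz sampling period
        pid = PID(kp=2.0, ki=50.0, kd=1e-4, dt=DT)   # illustrative gains
        position = 0.0
        for step in range(8000):                     # one simulated second
            target = 1.0                             # ideal position from the profile
            command = pid.update(target, position)
            position += command * DT                 # toy plant standing in for the stage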

  10. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  11. Semi-Automated Identification of Rocks in Images

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Castano, Andres; Anderson, Robert

    2006-01-01

    Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for automated finding of rocks, interactive adjustment of outlines of rocks, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128 MB of random-access memory.

  12. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    PubMed

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  13. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL), manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  14. System Security Authorization Agreement (SSAA) for the WIRE Archive and Research Facility

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Wide-Field Infrared Explorer (WIRE) Archive and Research Facility (WARF) is operated and maintained by the Department of Physics, USAF Academy. The lab is located in Fairchild Hall, 2354 Fairchild Dr., Suite 2A103, USAF Academy, CO 80840. The WARF will be used for research and education in support of the NASA Wide Field Infrared Explorer (WIRE) satellite, and for related high-precision photometry missions and activities. The WARF will also contain the WIRE preliminary and final archives prior to their delivery to the National Space Science Data Center (NSSDC). The WARF consists of a suite of equipment purchased under several NASA grants in support of WIRE research. The core system consists of a Red Hat Linux workstation with twin 933 MHz PIII processors, 1 GB of RAM, 133 GB of hard disk space, and DAT and DLT tape drives. The WARF is also supported by several additional networked Linux workstations. Only one of these (an older 450 MHz PIII computer running Red Hat Linux) is currently running, but the addition of several more is expected over the next year. In addition, a printer will soon be added. The WARF will serve as the primary research facility for the analysis and archiving of data from the WIRE satellite, together with limited quantities of other high-precision astronomical photometry data from both ground- and space-based facilities. However, the archive to be created here will not be the final archive; rather, the archive will be duplicated at the NSSDC and public access to the data will generally take place through that site.

  15. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
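
    For flavor, unsupervised pixel classification can be done in a few lines with scikit-learn's k-means; PixelLearn's own clustering and classification methods are more advanced, and the random image below is only a stand-in for real scientific data.

        import numpy as np
        from sklearn.cluster import KMeans

        # Cluster the pixels of an H x W x 3 image into k classes by color.
        image = np.random.rand(128, 128, 3)          # stand-in for a real image
        pixels = image.reshape(-1, 3)                # one row per pixel
        labels = KMeans(n_clusters=4, n_init=10).fit_predict(pixels)
        label_map = labels.reshape(image.shape[:2])  # per-pixel class image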

  16. Open discovery: An integrated live Linux platform of Bioinformatics tools

    PubMed Central

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live linux distributions for Bioinformatics have paved way for portability of Bioinformatics workbench in a platform independent manner. Moreover, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, open discovery ‐ a live linux distribution has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with complete sequence analysis environment and is capable of running windows executable programs in Linux environment. Open discovery portrays the advanced customizable configuration of fedora, with data persistency accessible via USB drive or DVD. Availability The Open Discovery is distributed free under Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in PMID:19238235

  17. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  18. TmoleX--a graphical user interface for TURBOMOLE.

    PubMed

    Steffen, Claudia; Thomas, Klaus; Huniar, Uwe; Hellweg, Arnim; Rubner, Oliver; Schroer, Alexander

    2010-12-01

    We herein present the graphical user interface (GUI) TmoleX for the quantum chemical program package TURBOMOLE. TmoleX allows users to execute the complete workflow of a quantum chemical investigation from the initial building of a structure to the visualization of the results in a user friendly graphical front end. The purpose of TmoleX is to make TURBOMOLE easy to use and to provide a high degree of flexibility. Hence, it should be a valuable tool for most users from beginners to experts. The program is developed in Java and runs on Linux, Windows, and Mac platforms. It can be used to run calculations on local desktops as well as on remote computers. © 2010 Wiley Periodicals, Inc.

  19. mr: A C++ library for the matching and running of the Standard Model parameters

    NASA Astrophysics Data System (ADS)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X. Running time: less than 1 second References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arxiv:1110.3397 [cs.MS
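
    For orientation, the simplest possible example of "running" is the one-loop QCD evolution of the strong coupling, which the mr library extends to three- and four-loop accuracy with full two-loop matching. This sketch is a standard textbook formula, not mr's code; the reference values are rounded.

        import math

        def alpha_s_one_loop(mu, mu0=91.19, alpha0=0.118, nf=5):
            """One-loop running of the strong coupling from scale mu0 to mu (GeV)."""
            b0 = 11.0 - 2.0 * nf / 3.0
            return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu / mu0))

        print(alpha_s_one_loop(1000.0))  # alpha_s at 1 TeV, evolved from the Z pole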

  20. HEP Computing

    Science.gov Websites

    Computing. Visitors who do not need a HEP Linux account: visitors with laptops can use the wireless network. HEP Linux account: Step 1: Click Here for New Account Application. After submitting the application, you

  1. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.

  2. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  3. birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.

    PubMed

    Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir

    2011-05-01

    birgHPC, a bootable Linux Live CD has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slots detection as well as provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.

  4. Plancton: an opportunistic distributed computing project based on Docker containers

    NASA Astrophysics Data System (ADS)

    Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara

    2017-10-01

    The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources, by constantly monitoring its CPU utilisation. It is designed to release the resources allocated opportunistically, whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We will show how fast the start-up and disposal of containers eventually enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we will show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, by giving a considerable advantage in terms of management compared to virtual machines.
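
    The fire-and-forget loop maps naturally onto the Docker SDK for Python plus psutil. A minimal sketch, assuming hypothetical thresholds, image, and pool size; real Plancton policies are configurable and richer.

        import docker   # Docker SDK for Python (third-party)
        import psutil   # cross-platform CPU metrics (third-party)

        client = docker.from_env()
        pool, IMAGE, MAX_POOL = [], "busybox:latest", 4   # illustrative settings

        while True:
            cpu = psutil.cpu_percent(interval=5)          # sample host CPU for 5 s
            if cpu < 30 and len(pool) < MAX_POOL:         # host idle: spawn a pilot
                pool.append(client.containers.run(IMAGE, "sleep 3600", detach=True))
            elif cpu > 80 and pool:                       # host busy: free resources
                pool.pop().kill()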

  5. Comparison of fMRI data analysis by SPM99 on different operating systems.

    PubMed

    Shinagawa, Hideo; Honda, Ei-ichi; Ono, Takashi; Kurabayashi, Tohru; Ohyama, Kimie

    2004-09-01

    The hardware chosen for fMRI data analysis may depend on the platform already present in the laboratory or the supporting software. In this study, we ran SPM99 software on multiple platforms to examine whether we could analyze fMRI data by SPM99, and to compare their differences and limitations in processing fMRI data, which can be attributed to hardware capabilities. Six normal right-handed volunteers participated in a study of hand-grasping to obtain fMRI data. Each subject performed a run that consisted of 98 images. The run was measured using a gradient echo-type echo planar imaging sequence on a 1.5T apparatus with a head coil. We used several personal computer (PC), Unix and Linux machines to analyze the fMRI data. There were no differences in the results obtained on several PC, Unix and Linux machines. The only limitations in processing large amounts of the fMRI data were found using PC machines. This suggests that the results obtained with different machines were not affected by differences in hardware components, such as the CPU, memory and hard drive. Rather, it is likely that the limitations in analyzing a huge amount of the fMRI data were due to differences in the operating system (OS).

  6. Real-time plasma control based on the ISTTOK tomography diagnostic

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.

    2008-10-01

    The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.

  7. MCR Container Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P

    MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find and install the necessary libraries, either in response to runtime errors resulting from them missing or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created with the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.

  8. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.

  9. Models@Home: distributed computing in bioinformatics using a screensaver based approach.

    PubMed

    Krieger, Elmar; Vriend, Gert

    2002-02-01

    Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
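
    The "stringent control over job scheduling" can be pictured as a lease-based coordinator that re-queues any work unit whose worker disappears (for example, when a screensaver client is interrupted). This is a minimal sketch of that idea, not Models@Home's implementation; the 30-minute lease is an invented parameter:

        import heapq, time

        class JobPool:
            def __init__(self, jobs, lease_seconds=1800):
                self.pending = list(jobs)   # not yet handed out
                self.leased = []            # heap of (deadline, job)
                self.lease = lease_seconds

            def checkout(self):
                now = time.time()
                # Reclaim any job whose lease expired (lost worker).
                while self.leased and self.leased[0][0] <= now:
                    _, job = heapq.heappop(self.leased)
                    self.pending.append(job)
                if not self.pending:
                    return None
                job = self.pending.pop()
                heapq.heappush(self.leased, (now + self.lease, job))
                return job

            def complete(self, job):
                self.leased = [(d, j) for d, j in self.leased if j != job]
                heapq.heapify(self.leased)

        pool = JobPool(range(10))
        print(pool.checkout())      # -> a work unit for the next idle PC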

  10. WAZA-ARI: computational dosimetry system for X-ray CT examinations II: development of web-based system.

    PubMed

    Ban, Nobuhiko; Takahashi, Fumiaki; Ono, Koji; Hasegawa, Takayuki; Yoshitake, Takayasu; Katsunuma, Yasushi; Sato, Kaoru; Endo, Akira; Kai, Michiaki

    2011-07-01

    A web-based dose computation system, WAZA-ARI, is being developed for patients undergoing X-ray CT examinations. The system is implemented in Java on a Linux server running Apache Tomcat. Users choose scanning options and input parameters via a web browser over the Internet. Dose coefficients, which were calculated in a Japanese adult male phantom (JM phantom), are retrieved upon user request and summed over the scan range specified by the user to estimate a normalised dose. Tissue doses are finally computed based on the radiographic exposure (mA s) and the pitch factor. While dose coefficients are currently available only for a limited number of CT scanner models, the system has achieved a high degree of flexibility and scalability without the use of commercial software.
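
    The dose arithmetic described above reduces to a sum of per-slice coefficients scaled by exposure and pitch. A minimal sketch, with invented coefficients (real ones are phantom- and scanner-specific):

        # Tissue dose = (sum of per-slice dose coefficients over the scan
        # range) * tube current-time product (mAs) / pitch factor.
        coeffs = {"lung":  [0.012, 0.015, 0.014, 0.011],  # mGy/mAs per slice
                  "liver": [0.002, 0.006, 0.013, 0.012]}

        def tissue_dose(organ, scan_start, scan_end, mAs, pitch):
            normalised = sum(coeffs[organ][scan_start:scan_end])
            return normalised * mAs / pitch

        print(tissue_dose("lung", 0, 3, mAs=100, pitch=1.375))  # mGy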

  11. Mendel-GPU: haplotyping and genotype imputation on graphics processing units

    PubMed Central

    Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth

    2012-01-01

    Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633

  12. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general-purpose computational resources for ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against challenges previously experienced with resources such as the shared-memory Nautilus system (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional, desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable for running ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be efficient.

  13. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2015-06-01

    examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully ... memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills ...

  14. Bioinformatics on the cloud computing platform Azure.

    PubMed

    Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk-through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.

  15. Bioinformatics on the Cloud Computing Platform Azure

    PubMed Central

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk-through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  16. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    NASA Technical Reports Server (NTRS)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves. Included is a discussion of the workload involved in the processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.

  17. Geowall: Investigations into low-cost stereo display technologies

    USGS Publications Warehouse

    Steinwand, Daniel R.; Davis, Brian; Weeks, Nathan

    2003-01-01

    Recently, the combination of new projection technology, fast, low-cost graphics cards, and Linux-powered personal computers has made it possible to provide a stereoprojection and stereoviewing system that is much more affordable than previous commercial solutions. These Geowall systems are low-cost visualization systems built with commodity off-the-shelf components, running on open-source (and other) operating systems, and using open-source application software. In short, they are 'Beowulf-class' visualization systems that provide a cost-effective way for the U.S. Geological Survey to broaden participation in the visualization community and to view stereoimagery and three-dimensional models.

  18. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components. • MS SQL server 2008 backend database. • Microsoft IIS running on Windows server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access

  19. NSTX-U Advances in Real-Time C++11 on Linux

    NASA Astrophysics Data System (ADS)

    Erickson, Keith G.

    2015-08-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one periodic deadline is a failure) of 200 microseconds.
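
    The hard-deadline bookkeeping can be sketched as follows. This is illustrative only: the actual DCPS is C++11 on RedHawk Linux with a 200-microsecond period, which a Python loop cannot honour, so the sketch uses an invented 10 ms period just to show the absolute-deadline accounting pattern:

        import time

        PERIOD_NS = 10_000_000          # 10 ms illustrative period

        def control_step():
            pass                        # protection algorithms would run here

        next_deadline = time.monotonic_ns() + PERIOD_NS
        for _ in range(1000):
            control_step()
            if time.monotonic_ns() > next_deadline:
                raise SystemExit("deadline miss -> treat as failure")
            # Sleep until the next absolute release time (avoids drift).
            remaining = next_deadline - time.monotonic_ns()
            if remaining > 0:
                time.sleep(remaining / 1e9)
            next_deadline += PERIOD_NS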

  20. Missed deadline notification in best-effort schedulers

    NASA Astrophysics Data System (ADS)

    Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.

    2003-12-01

    It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
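
    Because MDN was implemented in research kernels, no such system call exists in stock Linux; the sketch below only shows how an application might issue a no-argument notification of this kind, with the syscall number loudly hypothetical:

        import ctypes, time

        libc = ctypes.CDLL(None, use_errno=True)
        NR_MDN = 451                    # hypothetical syscall number

        def notify_missed_deadline():
            # On a stock kernel this simply returns -1 (ENOSYS).
            return libc.syscall(NR_MDN)

        deadline = time.monotonic() + 0.010
        time.sleep(0.012)               # simulate a frame that overran
        if time.monotonic() > deadline:
            notify_missed_deadline()    # scheduler may favour us next period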

  1. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared with the conventional use of a Neural Network as a discriminator, this new implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29. Program summary: Program title: TMVA-BNN Catalogue identifier: AEJX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: BSD license No. of lines in distributed program, including test data, etc.: 5094 No. of bytes in distributed program, including test data, etc.: 1,320,987 Distribution format: tar.gz Programming language: C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN. Classification: 11.9 External routines: ROOT package version 5.29 or higher (http://root.cern.ch) Nature of problem: Non-parametric fitting of multivariate distributions Solution method: An implementation of Neural Network following the Bayesian statistical interpretation. Uses Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation. Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.

  2. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  3. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  4. Porting and refurbishment of the WSS TNG control software

    NASA Astrophysics Data System (ADS)

    Caproni, Alessandro; Zacchei, Andrea; Vuerli, Claudio; Pucillo, Mauro

    2004-09-01

    The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma in the Canary Islands, developed at the beginning of the 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communications between the real-time systems (VME), different workstations and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It aimed to refurbish the control software by introducing some of the new software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes to the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion of some HPs into the simulated control room (to check the mixed environment); and substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility to develop high-level WSS applications with almost any programming language that implements Berkeley sockets. A library for developing Java applications has also been created and tested.

  5. Control of the TSU 2-m automatic telescope

    NASA Astrophysics Data System (ADS)

    Eaton, Joel A.; Williamson, Michael H.

    2004-09-01

    Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running Linux and communicating over Ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program to parse logfiles from the telescope and identify problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to get each year.
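
    A sketch of the kind of lightweight UDP status exchange the four computers could use; the port, host, and message format below are invented for illustration:

        import socket, json

        EXEC_ADDR = ("127.0.0.1", 47000)    # hypothetical executive host

        def send_status(status):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.sendto(json.dumps(status).encode(), EXEC_ADDR)
            sock.close()

        def receive_one():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(EXEC_ADDR)
            data, addr = sock.recvfrom(4096)
            sock.close()
            return json.loads(data)

        # e.g. the roof computer reporting weather-driven state:
        # send_status({"node": "roof", "roof": "closed", "reason": "clouds"})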

  6. Drowning in PC Management: Could a Linux Solution Save Us?

    ERIC Educational Resources Information Center

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  7. Scilab software package for the study of dynamical systems

    NASA Astrophysics Data System (ADS)

    Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.

    2008-05-01

    This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamical systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability for users to insert their own ODEs. Program summary: Program title: Chaos Catalogue identifier: AEAP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 885 No. of bytes in distributed program, including test data, etc.: 5925 Distribution format: tar.gz Programming language: Scilab 3.1.1 Computer: PC-compatible running Scilab on MS Windows or Linux Operating system: Windows XP, Linux RAM: below 100 Megabytes Classification: 6.2 Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODEs). Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies. Restrictions: The package routines are normally able to handle ODE systems of high order (up to order twelve and possibly higher), depending on the nature of the problem. Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs and Lyapunov exponent calculation.
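
    As a small worked example of one quantity such a package computes, the largest Lyapunov exponent of the logistic map x -> r*x*(1-x) is the orbit average of log|f'(x)| = log|r*(1-2x)|. This standalone sketch is not taken from the Scilab package:

        import math

        def logistic_lyapunov(r, x0=0.4, n_transient=1000, n=100_000):
            x = x0
            for _ in range(n_transient):        # discard the transient
                x = r * x * (1 - x)
            acc = 0.0
            for _ in range(n):
                acc += math.log(abs(r * (1 - 2 * x)))
                x = r * x * (1 - x)
            return acc / n

        print(logistic_lyapunov(3.5))   # negative: periodic regime
        print(logistic_lyapunov(4.0))   # ~log(2) = 0.693: chaotic regime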

  8. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
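
    The single-failure protection rests on XOR parity: the parity block is the XOR of the data blocks, so any one lost block is the XOR of the survivors. A minimal sketch:

        def xor_blocks(blocks):
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    out[i] ^= b
            return bytes(out)

        data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data disks
        parity = xor_blocks(data)            # stored on the parity disk

        # Disk 1 fails: rebuild its block from the survivors plus parity.
        rebuilt = xor_blocks([data[0], data[2], parity])
        assert rebuilt == data[1]
        print("recovered:", rebuilt)         # double failures: unrecoverable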

  9. Computational Support for Technology- Investment Decisions

    NASA Technical Reports Server (NTRS)

    Adumitroaie, Virgil; Hua, Hook; Lincoln, William; Block, Gary; Mrozinski, Joseph; Shelton, Kacie; Weisbin, Charles; Elfes, Alberto; Smith, Jeffrey

    2007-01-01

    Strategic Assessment of Risk and Technology (START) is a user-friendly computer program that assists human managers in making decisions regarding research-and-development investment portfolios in the presence of uncertainties and of non-technological constraints that include budgetary and time limits, restrictions related to infrastructure, and programmatic and institutional priorities. START facilitates quantitative analysis of technologies, capabilities, missions, scenarios and programs, and thereby enables the selection and scheduling of value-optimal development efforts. START incorporates features that, variously, perform or support a unique combination of functions, most of which are not systematically performed or supported by prior decision-support software. These functions include the following: Optimal portfolio selection using an expected-utility-based assessment of capabilities and technologies; Temporal investment recommendations; Distinctions between enhancing and enabling capabilities; Analysis of partial funding for enhancing capabilities; and Sensitivity and uncertainty analysis. START can run on almost any computing hardware, within Linux and related operating systems that include Mac OS X versions 10.3 and later, and can run in Windows under the Cygwin environment. START can be distributed in binary code form. START calls, as external libraries, several open-source software packages. Output is in Excel (.xls) file format.
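
    The core selection problem, a value-optimal portfolio under a budget, can be sketched with a brute-force search. The project names, costs, and utilities below are invented, and START's actual optimizer is far richer (time phasing, enabling versus enhancing capabilities, uncertainty analysis):

        from itertools import combinations

        projects = {                 # name: (cost $M, expected utility)
            "rover-autonomy": (30, 8.0),
            "solar-sail":     (45, 9.5),
            "x-band-comm":    (20, 4.0),
            "cryo-cooler":    (25, 5.5),
        }
        BUDGET = 70

        best, best_u = (), 0.0
        for k in range(len(projects) + 1):
            for combo in combinations(projects, k):
                cost = sum(projects[p][0] for p in combo)
                util = sum(projects[p][1] for p in combo)
                if cost <= BUDGET and util > best_u:
                    best, best_u = combo, util

        print(best, best_u)   # -> the value-optimal affordable portfolio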

  10. PREMER: a Tool to Infer Biological Networks.

    PubMed

    Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R

    2017-10-04

    Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
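
    The basic information-theoretic step in this family of methods is pairwise mutual information between discretized variables, I(X;Y) = sum over x,y of p(x,y) log[p(x,y)/(p(x)p(y))]. A self-contained sketch (PREMER itself goes much further: multidimensional estimates, entropy reduction, and direction assignment):

        import math
        from collections import Counter

        def mutual_information(xs, ys):
            n = len(xs)
            pxy = Counter(zip(xs, ys))
            px, py = Counter(xs), Counter(ys)
            mi = 0.0
            for (x, y), c in pxy.items():
                # p(x,y)/(p(x)p(y)) simplifies to c*n/(count_x*count_y).
                mi += (c / n) * math.log(c * n / (px[x] * py[y]))
            return mi                    # in nats

        gene_a = [0, 0, 1, 1, 0, 1, 1, 0]
        gene_b = [0, 0, 1, 1, 0, 1, 0, 0]   # mostly tracks gene_a
        gene_c = [1, 0, 0, 1, 1, 0, 1, 0]   # independent of gene_a
        print(mutual_information(gene_a, gene_b))  # high -> candidate edge
        print(mutual_information(gene_a, gene_c))  # 0.0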

  11. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    PubMed

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  12. SeedVicious: Analysis of microRNA target and near-target sites.

    PubMed

    Marco, Antonio

    2018-01-01

    Here I describe seedVicious, a versatile microRNA target-site prediction program that can be easily fitted into annotation pipelines and run over custom datasets. SeedVicious finds microRNA canonical sites plus other, less efficient, target sites. Among other novel features, seedVicious can compute evolutionary gains/losses of target sites using maximum parsimony, and also detect near-target sites, which have one nucleotide different from a canonical site. Near-target sites are important for studying population variation in microRNA regulation. Some analyses suggest that near-target sites may also be functional sites, although there is no conclusive evidence for that, and they may actually be target alleles segregating in a population. SeedVicious does not aim to outperform but to complement existing microRNA prediction tools. For instance, the precision of TargetScan is almost doubled (from 11% to ~20%) when we filter predictions by the distance between target sites using this program. Interestingly, two adjacent canonical target sites are more likely to be present in bona fide target transcripts than pairs of target sites at slightly longer distances. The software is written in Perl and runs on 64-bit Unix computers (Linux and MacOS X). Users with no computing experience can also run the program on a dedicated web server by uploading custom data, or browse pre-computed predictions. SeedVicious and its associated web server and database (SeedBank) are distributed under the GPL/GNU license.
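
    A sketch of canonical and near-target scanning as described above: a canonical 7mer site is the reverse complement of miRNA nucleotides 2-8, and a near-target differs from it at exactly one position. This is a simplification of what seedVicious does:

        COMP = str.maketrans("ACGU", "UGCA")

        def seed_site(mirna):
            """DNA-style target site matched by miRNA positions 2-8."""
            seed = mirna[1:8]                       # nt 2-8, 0-based slice
            return seed.translate(COMP)[::-1].replace("U", "T")

        def scan(utr, site):
            hits, near = [], []
            for i in range(len(utr) - len(site) + 1):
                window = utr[i:i + len(site)]
                mism = sum(a != b for a, b in zip(window, site))
                if mism == 0:
                    hits.append(i)      # canonical site
                elif mism == 1:
                    near.append(i)      # near-target site
            return hits, near

        site = seed_site("UGAGGUAGUAGGUUGUAUAGUU")   # let-7a -> "CTACCTC"
        print(site, scan("AAACTACCTCGGCTACCTTGGG", site))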

  13. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summary: Program title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux-based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of CPUs and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.

  14. Embedded systems for supporting computer accessibility.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized assistive technology (AT) software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open-source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and then, after processing, generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices that receive input commands through the USB HID protocol.
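
    On Linux, the board side of such a design can be as simple as writing 8-byte boot-keyboard reports to a USB gadget HID device. A minimal sketch, assuming the HID gadget function has already been configured (e.g. via configfs) so that /dev/hidg0 exists:

        import time

        KEY_A = 0x04           # HID usage ID for the 'a' key

        def type_key(dev, usage, modifiers=0):
            # Boot-keyboard report layout: [modifiers, reserved, key1..key6]
            dev.write(bytes([modifiers, 0, usage, 0, 0, 0, 0, 0]))
            dev.flush()
            time.sleep(0.01)
            dev.write(bytes(8))        # all zeros = key released
            dev.flush()

        with open("/dev/hidg0", "wb") as hid:  # gadget-mode hardware only
            type_key(hid, KEY_A)               # target host receives an 'a'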

  15. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  16. Grid Computing Environment using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Alanis, Fransisco; Mahmood, Akhtar

    2003-10-01

    Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphics user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
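
    The SPMD pattern such cluster codes follow can be sketched with the modern mpi4py package as a stand-in for the era's C/LAM-MPI codes (run with, e.g., mpirun -n 4 python pi_mpi.py; the script name is an invented example):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each process integrates its own stride of 4/(1+x^2) on [0,1],
        # whose integral is pi (midpoint rule).
        n = 1_000_000
        local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2)
                    for i in range(rank, n, size)) / n

        pi = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("pi ~", pi)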

  17. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
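
    In the spirit of the student exercises described above, a bouncing ball needs only the computational loop; the rendering is automatic. This sketch uses the current "vpython" package (the article's era used the "visual" module):

        from vpython import sphere, box, vector, rate, color

        floor = box(pos=vector(0, -5, 0), size=vector(10, 0.2, 10))
        ball = sphere(pos=vector(0, 4, 0), radius=0.5, color=color.red,
                      make_trail=True)
        v = vector(0, 0, 0)
        g = vector(0, -9.8, 0)
        dt = 0.01

        while True:
            rate(100)                       # ~100 steps rendered per second
            v = v + g * dt                  # pure computation; the Visual
            ball.pos = ball.pos + v * dt    # thread draws the scene itself
            if ball.pos.y < floor.pos.y + 0.7:
                v.y = -v.y                  # elastic bounce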

  18. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    NASA Technical Reports Server (NTRS)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  19. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  20. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.

  1. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    DTIC Science & Technology

    2013-09-01

    solutions to virtualization include lightweight, user-level implementations on Linux operating systems, but these solutions are often dependent on a specific version of...

  2. Development of a platform-independent receiver control system for SISIFOS

    NASA Astrophysics Data System (ADS)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now, receiver control software was a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a simple query language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication, and SCPI as a command control language. The complete system is in principle platform-independent, but for cost-saving reasons we currently run it on a PC486 under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC, we are using a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like ordinary devices. The advantage of this system is firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.

  3. NSTX-U Advances in Real-Time C++11 on Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Keith G.

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one periodic deadline is a failure) of 200 microseconds.

  4. NSTX-U Advances in Real-Time C++11 on Linux

    DOE PAGES

    Erickson, Keith G.

    2015-08-14

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one periodic deadline is a failure) of 200 microseconds.

  5. Limits, discovery and cut optimization for a Poisson process with uncertainty in background and signal efficiency: TRolke 2.0

    NASA Astrophysics Data System (ADS)

    Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.

    2010-03-01

    A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, Poissonian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0 Catalogue identifier: AEFT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: MIT license No. of lines in distributed program, including test data, etc.: 3431 No. of bytes in distributed program, including test data, etc.: 21 789 Distribution format: tar.gz Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8). RAM: ~20 MB Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/) Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.

  6. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  7. MCdevelop - a universal framework for Stochastic Simulations

    NASA Astrophysics Data System (ADS)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. The efficient development, testing, and parallel running of SS software require a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary: Program title: MCdevelop Catalogue identifier: AEHW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 48 136 No. of bytes in distributed program, including test data, etc.: 355 698 Distribution format: tar.gz Programming language: ANSI C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes Classification: 11.3 External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms and Autotools.

  8. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    NASA Astrophysics Data System (ADS)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library, "arc4nix", to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and meta-programming to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arc4nix scales linearly in a distributed environment. Arc4nix is open-source software.

  9. The Research on Linux Memory Forensics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, it proposes a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
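
    The core of such an analysis, interpreting raw bytes at known structure-member offsets and following pointers from one task structure to the next, fits in a few lines. The sketch below is a toy in plain Python, not the paper's tool: the structure layout, the offsets, and the fake memory image are all invented, standing in for offsets that would be recovered from the kernel's ELF/DWARF debug information.

        import struct

        # Hypothetical layout of a task structure in the dump; the paper's
        # method recovers such member offsets from debug info so the same
        # tool works across kernel versions.
        OFF_PID, OFF_COMM, OFF_NEXT = 0, 8, 24

        def read_task(mem, addr):
            pid, = struct.unpack_from("<q", mem, addr + OFF_PID)
            comm = mem[addr + OFF_COMM: addr + OFF_COMM + 16].split(b"\0")[0]
            nxt, = struct.unpack_from("<Q", mem, addr + OFF_NEXT)
            return pid, comm.decode(), nxt

        def walk_tasks(mem, head):
            # Follow the linked list of task structures through the dump,
            # stopping when it terminates or cycles back to the head.
            addr, seen = head, set()
            while addr not in seen:
                seen.add(addr)
                pid, comm, nxt = read_task(mem, addr)
                yield pid, comm
                if nxt == head or nxt == 0:
                    break
                addr = nxt

        # Build a tiny fake "physical memory" image holding two linked tasks.
        mem = bytearray(64)
        struct.pack_into("<q", mem, 0, 1);  mem[8:13] = b"init\0"
        struct.pack_into("<Q", mem, 24, 32)            # next -> task at 32
        struct.pack_into("<q", mem, 32, 42); mem[40:45] = b"sshd\0"
        struct.pack_into("<Q", mem, 56, 0)             # end of list
        print(list(walk_tasks(mem, 0)))    # [(1, 'init'), (42, 'sshd')]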

  10. Speckle interferometry. Data acquisition and control for the SPID instrument.

    NASA Astrophysics Data System (ADS)

    Altarac, S.; Tallon, M.; Thiebaut, E.; Foy, R.

    1998-08-01

    SPID (SPeckle Imaging by Deconvolution) is a new speckle camera currently under construction at CRAL-Observatoire de Lyon. Its high spectral resolution and high image-restoration capabilities open up new astrophysical programs. The SPID instrument is composed of four main optical modules, which are fully automated and computer controlled by software written in Tcl/Tk/Tix and C. This software provides intelligent assistance to the user by choosing observational parameters as a function of atmospheric parameters, computed in real time, and of the desired restored image quality. Data acquisition is performed by a photon-counting detector (CP40). A VME-based computer under OS9 controls the detector and stores the data. The intelligent system runs under Linux on a PC. A slave PC under DOS commands the motors. These three computers communicate through an Ethernet network. SPID can be considered a precursor of the very high spatial resolution camera for the VLT (Very Large Telescope, four 8-meter telescopes currently being built in Chile by the European Southern Observatory).

  11. FTAP: a Linux-based program for tapping and music experiments.

    PubMed

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.
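
    FTAP's experiments are driven by a simple ASCII parameter file rather than custom code. The exact keyword set of FTAP's format is not reproduced in the abstract, so the sketch below is only a generic illustration of the approach: a line-oriented key/value file parsed into an experiment specification. All parameter names are invented.

        # Parse a line-oriented "KEY VALUE" parameter file of the general
        # kind FTAP uses. The keywords below are invented for illustration;
        # consult the FTAP documentation for the real set.
        SAMPLE = """\
        TRIALS 30
        PACING_MS 500
        FEEDBACK delayed
        FEEDBACK_DELAY_MS 200
        """

        def parse_params(text):
            params = {}
            for line in text.splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                  # skip blanks and comments
                key, value = line.split(None, 1)
                params[key] = int(value) if value.isdigit() else value
            return params

        print(parse_params(SAMPLE))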

  12. Soft control of scanning probe microscope with high flexibility.

    PubMed

    Liu, Zhenghui; Guo, Yuzheng; Zhang, Zhaohui; Zhu, Xing

    2007-01-01

    Most commercial scanning probe microscopes have multiple embedded digital microprocessors and utilize complex software for system control, which is not easily obtained or modified by researchers wishing to perform novel and special applications. In this paper, we present a simple and flexible control solution that depends only on software, running on a single-processor personal computer with a real-time Linux operating system, to carry out all the control tasks including negative feedback, tip movement, data processing and the user interface. In this way, we fully exploit the computational and programming potential of a personal computer, enabling us to manipulate the scanning probe as required without any special digital control circuits and the related technical know-how. This solution has been successfully applied to a homemade ultrahigh-vacuum scanning tunneling microscope and a multiprobe scanning tunneling microscope.
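
    The negative-feedback task mentioned above is, at its core, a software control loop. The following toy sketch (not the authors' code; the exponential "tunneling current" model and all constants are invented) shows a proportional-integral loop regulating a measured signal to a setpoint, the kind of calculation such an all-software controller runs on every cycle.

        import math

        SETPOINT, KP, KI, DT = 1.0, 0.05, 0.5, 1e-4   # toy constants

        def current(gap):
            # Crude stand-in for tunneling current: exponential in the gap.
            return math.exp(-10.0 * gap)

        gap, integral = 0.5, 0.0
        for step in range(2000):
            error = SETPOINT - current(gap)
            integral += error * DT
            gap -= KP * error + KI * integral   # negative feedback on tip height
        print(round(current(gap), 3))           # converges to the setpoint 1.0

    On a real instrument the loop period, not the arithmetic, is the hard part, which is why the authors rely on a real-time Linux kernel to guarantee the cycle deadline.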

  13. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  14. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
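
    A minimal sketch of histogram-based change detection in the spirit of the RGB-colour-histogram approach above can be written with OpenCV's Python bindings. This is not the paper's algorithm or thresholds; "reference.png" and "frame.png" are placeholder file names, and the 0.9 cut is illustrative.

        import cv2

        def rgb_hist(image):
            # One 32-bin normalised histogram per colour channel.
            hists = [cv2.calcHist([image], [ch], None, [32], [0, 256])
                     for ch in range(3)]
            return [cv2.normalize(h, h).flatten() for h in hists]

        reference = cv2.imread("reference.png")   # empty scene
        frame = cv2.imread("frame.png")           # current camera frame

        scores = [cv2.compareHist(h0, h1, cv2.HISTCMP_CORREL)
                  for h0, h1 in zip(rgb_hist(reference), rgb_hist(frame))]
        if min(scores) < 0.9:                     # illustrative threshold
            print("possible intruder: histogram changed", scores)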

  15. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    NASA Astrophysics Data System (ADS)

    Muruganandam, P.; Adhikari, S. K.

    2009-10-01

    Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra-low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In the one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 is also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all). Program summary Program title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial Catalogue identifier: AEDU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 122 907 No. of bytes in distributed program, including test data, etc.: 609 662 Distribution format: tar.gz Programming language: FORTRAN 77 and Fortran 90/95 Computer: PC Operating system: Linux, Unix RAM: 1 GByte (i, iv, v), 2 GByte (ii, vi, vii, x, xi), 4 GByte (iii, viii, xii), 8 GByte (ix) Classification: 2.9, 4.3, 4.12 Nature of problem: These programs are designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one, two or three space dimensions with a harmonic, circularly-symmetric, spherically-symmetric, axially-symmetric or anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Solution method: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields the solution of stationary and/or non-stationary problems. Additional comments: This package consists of 12 programs; see "Program title" above. Fortran 77 versions are provided for each of the 12 and, in addition, Fortran 90/95 versions are included for ii, iii, vi, viii, ix, xii. For the particular purpose of each program, please see the summaries below; a minimal sketch of the split-step scheme is given after them. Running time: Minutes on a medium PC (i, iv, v, vii, x, xi), a few hours on a medium PC (ii, vi, viii, xii), days on a medium PC (iii, ix).
    Program summaries (1)-(12). The following details are common to all twelve programs. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Distribution format: tar.gz (one electronic file <program name>.tar.gz per program). Computers: PC/Linux, workstation/UNIX. Unusual features: none. Nature of physical problem: each program solves the time-dependent Gross-Pitaevskii nonlinear partial differential equation, which describes the properties of a dilute trapped Bose-Einstein condensate, for the trap geometry listed below. Method of solution: the equation is solved by the split-step Crank-Nicolson method, discretizing in space and time; the discretized equation is then propagated, in imaginary or real time as listed below, over small time steps. Imaginary-time propagation yields stationary solutions; real-time propagation yields both stationary and non-stationary solutions. Program-specific details:

        No.   Program files                        Language(s)        Max RAM   Trap geometry               Propagation   Typical running time (medium PC)
        (1)   imagtime1d.F                         Fortran 77         1 GByte   1D, harmonic                imaginary     minutes
        (2)   imagtimecir.F                        Fortran 77         1 GByte   2D, circularly-symmetric    imaginary     minutes
        (3)   imagtimesph.F                        Fortran 77         1 GByte   3D, spherically-symmetric   imaginary     minutes
        (4)   realtime1d.F                         Fortran 77         2 GByte   1D, harmonic                real          minutes
        (5)   realtimecir.F                        Fortran 77         2 GByte   2D, circularly-symmetric    real          minutes
        (6)   realtimesph.F                        Fortran 77         2 GByte   3D, spherically-symmetric   real          minutes
        (7)   imagtimeaxial.F, imagtimeaxial.f90   Fortran 77 and 90  2 GByte   3D, axially-symmetric       imaginary     few hours
        (8)   imagtime2d.F, imagtime2d.f90         Fortran 77 and 90  2 GByte   2D, anisotropic             imaginary     few hours
        (9)   realtimeaxial.F, realtimeaxial.f90   Fortran 77 and 90  4 GByte   3D, axially-symmetric       real          hours
        (10)  realtime2d.F, realtime2d.f90         Fortran 77 and 90  4 GByte   2D, anisotropic             real          hours
        (11)  imagtime3d.F, imagtime3d.f90         Fortran 77 and 90  4 GByte   3D, anisotropic             imaginary     few days
        (12)  realtime3d.F, realtime3d.f90         Fortran 77 and 90  8 GByte   3D, anisotropic             real          days
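
    The split-step Crank-Nicolson scheme shared by all twelve programs is compact enough to sketch. The fragment below is an illustration in Python/NumPy (assuming NumPy and SciPy are available), not a translation of the distributed Fortran codes: it propagates the 1D GP equation in imaginary time, applying the trap potential and nonlinearity as exponential half-steps, the kinetic term through a Crank-Nicolson tridiagonal solve, and renormalising after every step.

        import numpy as np
        from scipy.linalg import solve_banded

        # Grid and toy parameters (dimensionless units; g is the nonlinearity).
        N, L, dt, g = 512, 20.0, 1e-3, 10.0
        x = np.linspace(-L / 2, L / 2, N)
        dx = x[1] - x[0]
        V = 0.5 * x**2                                # harmonic trap

        # Crank-Nicolson matrices for T = -(1/2) d2/dx2 in imaginary time:
        # (1 + dt*T/2) psi_new = (1 - dt*T/2) psi_old.
        r = dt / (4 * dx**2)
        ab = np.zeros((3, N))                          # banded (1 + dt*T/2)
        ab[0, 1:] = -r                                 # superdiagonal
        ab[1, :] = 1 + 2 * r                           # diagonal
        ab[2, :-1] = -r                                # subdiagonal

        psi = np.exp(-x**2)                            # initial guess
        for _ in range(5000):
            # Half-step with potential + nonlinearity (exact exponential).
            psi *= np.exp(-0.5 * dt * (V + g * psi**2))
            # Kinetic step: explicit half, then implicit tridiagonal solve.
            rhs = psi.copy()
            rhs[1:-1] += r * (psi[2:] - 2 * psi[1:-1] + psi[:-2])
            psi = solve_banded((1, 1), ab, rhs)
            # Second nonlinear half-step, then renormalise to unit norm.
            psi *= np.exp(-0.5 * dt * (V + g * psi**2))
            psi /= np.sqrt(np.sum(psi**2) * dx)

        mu = np.sum(0.5 * np.gradient(psi, dx)**2 + (V + g * psi**2) * psi**2) * dx
        print("chemical potential ~", round(mu, 4))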

  16. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summary Program title: CADNA Catalogue identifier: AEAT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 53 420 No. of bytes in distributed program, including test data, etc.: 566 495 Distribution format: tar.gz Programming language: Fortran Computer: PC running Linux with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: Linux, UNIX Classification: 4.14, 6.5, 20 Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References: [1] The CADNA library, http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
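
    The principle behind Discrete Stochastic Arithmetic is easy to demonstrate outside CADNA itself. The toy below (plain Python, not CADNA's Fortran types) runs the same cancellation-prone computation several times with a random perturbation of about one rounding error injected at each operation, then estimates the number of reliable significant digits from the spread of the results.

        import math, random, statistics

        def rnd(x):
            # Random rounding: perturb by about one unit in the last place,
            # mimicking a randomly chosen rounding direction.
            return x * (1.0 + random.choice((-1.0, 1.0)) * 2.0**-53)

        def unstable_sum():
            # Catastrophic cancellation: ((1 + 1e-12) - 1) scaled back up.
            diff = rnd(rnd(1.0 + 1e-12) - rnd(1.0))
            return rnd(diff * 1e12)

        samples = [unstable_sum() for _ in range(3)]
        mean = statistics.mean(samples)
        spread = statistics.pstdev(samples)
        digits = math.log10(abs(mean) / spread) if spread else 15
        print(samples, "-> about", round(digits), "exact significant digits")

    Despite double precision carrying roughly 16 digits, the spread reveals that only about 4 digits of this particular result survive the cancellation, which is exactly the kind of diagnosis CADNA automates.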

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Robert; Rivers, Wilmer

    any single computer program for seismic data analysis will not have all the capabilities needed to study reference events, since these detailed studies will be highly specialized. It may be necessary to develop and test new algorithms, and then these special codes must be integrated with existing software to use their conventional data-processing routines. We have investigated two means of establishing communications between the legacy and new codes: CORBA and XML/SOAP Web services. We have investigated making new Java code communicate with a legacy C-language program, geotool, running under Linux. Both methods were successful, but both were difficult to implement. C programs on UNIX/Linux are poorly supported for Web services, compared with the Java and .NET languages and platforms. Easier-to-use middleware will be required for scientists to construct distributed applications as easily as stand-alone ones. Considerable difficulty was encountered in modifying geotool, and this problem shows the need to use component-based user interfaces instead of large C-language codes where changes to one part of the program may introduce side effects into other parts. We have nevertheless made bug fixes and enhancements to that legacy program, but it remains difficult to expand it through communications with external software.

  18. Towards a new Mercator Observatory Control System

    NASA Astrophysics Data System (ADS)

    Pessemier, W.; Raskin, G.; Prins, S.; Saey, P.; Merges, F.; Padilla, J. P.; Van Winckel, H.; Waelkens, C.

    2010-07-01

    A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). Replacing the original transputer-based system, the new Mercator Observatory Control System (MOCS) consists of a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter is chosen as the primary platform for the lower level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the system design, implementation details and the current status of the project.

  19. Scalable and Accurate SMT-Based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-31

    CVC4 can be accessed from C, C++, Java, and OCaml, and provisions have been made to support other languages. CVC4 can be compiled and run on various flavors of Linux and Mac OS

  20. Kernel-based Linux emulation for Plan 9.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  1. Comparative Analysis of Active and Passive Mapping Techniques in an Internet-Based Local Area Network

    DTIC Science & Technology

    2004-03-01

    [Fragment of a hardware table of test systems, columns as in the source: PIII/500 (K) 512 A11 3C905; Honeynet, PIII/1000 (C) 512 A11 3C905; Generator, PIII/800 (C) 256 A11 3C905.] Each system is running Debian GNU/Linux "unstable" ... [Reference fragments: "... Network," September 2000, http://www.issues.af.mil/notams/notam00-5.html, accessed January 16, 2004; "Debian GNU/Linux 3.0 Released," Debian News ...] ... interact with those servers. 1.5 Summary The remainder of this document is organized into four chapters. Chapter 2 contains the literature review where

  2. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements of programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, which creates a bottleneck for large-memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compare with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
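
    The quoted theoretical peak can be reproduced with back-of-the-envelope arithmetic. The node clock and per-cycle throughput are not given in the abstract, so the figures below are assumptions chosen to match the stated totals:

        nodes, cores_per_node = 20, 12     # 240 cores, as stated above
        clock_ghz = 2.6                    # assumed core clock
        flops_per_cycle = 4                # assumed (e.g. 2 adds + 2 muls via SSE)

        peak = nodes * cores_per_node * clock_ghz * flops_per_cycle
        print(peak, "GFLOPS")              # 2496, close to the ~2500 quoted
        print(round(100 * 900 / peak), "% of peak achieved by HPL")   # ~36 %

    The roughly one-third HPL efficiency is consistent with the gigabit-Ethernet bottleneck described in the abstract; clusters with faster interconnects typically reach well over half of peak.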

  3. Pse-Analysis: a python package for DNA/RNA and protein/peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace of genome/proteome analyses, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All of the aforementioned tedious jobs can be done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.
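
    Pse-Analysis's own API is not reproduced in the abstract, but the five automated steps it lists map onto a few lines of a generic scikit-learn workflow. The sketch below is only an analogue: random numbers stand in for pseudo-component feature extraction, and the model grid is arbitrary.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))       # stand-in feature vectors (step 1)
        y = rng.integers(0, 2, size=200)     # stand-in class labels

        # (2) parameter selection over a small grid and (3) model training;
        # n_jobs=-1 uses all CPU cores, analogous to the multiprocessing
        # speed-up described above.
        search = GridSearchCV(SVC(),
                              {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                              cv=5, n_jobs=-1)
        search.fit(X, y)

        # (4) cross-validation and (5) prediction-quality evaluation.
        scores = cross_val_score(search.best_estimator_, X, y, cv=5)
        print(search.best_params_, round(scores.mean(), 3))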

  4. A Commodity Computing Cluster

    NASA Astrophysics Data System (ADS)

    Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.

    We have assembled a cluster of Intel Pentium-based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz machine to dual-CPU Pentium-III 450 MHz machines. Although this will be sufficient for our "embarrassingly parallelizable" problem, it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.

  5. Gener: a minimal programming module for chemical controllers based on DNA strand displacement

    PubMed Central

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-01-01

    Summary: Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of DNA programs based on a simple two-domain strand displacement algebra, the minimal available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools; in particular, its programs can be exported to Microsoft Research's DSD tool as well as to LaTeX. Availability and implementation: Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a Windows executable that can be run on Mac OS X and Linux by using Mono. Contact: ozan@cosbi.eu PMID:25957353

  6. Gener: a minimal programming module for chemical controllers based on DNA strand displacement.

    PubMed

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-09-01

    Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of DNA programs based on a simple two-domain strand displacement algebra, the minimal available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools; in particular, its programs can be exported to Microsoft Research's DSD tool as well as to LaTeX. Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a Windows executable that can be run on Mac OS X and Linux by using Mono. ozan@cosbi.eu. © The Author 2015. Published by Oxford University Press.

  7. ChemoPy: freely available python package for computational biology and chemoinformatics.

    PubMed

    Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng

    2013-04-15

    Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
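
    ChemoPy's own calling conventions are not shown in the abstract. As an illustration of the same kind of computation, the widely used RDKit library can produce one of the fingerprint types listed above (Morgan/circular) together with a simple descriptor in a few lines; this is RDKit's API, not ChemoPy's.

        from rdkit import Chem
        from rdkit.Chem import AllChem, Descriptors

        mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
        # Morgan/circular fingerprint, radius 2, folded to 1024 bits.
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
        print(fp.GetNumOnBits(), "bits set out of", fp.GetNumBits())
        print("MolWt:", round(Descriptors.MolWt(mol), 2))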

  8. Producing genome structure populations with the dynamic and automated PGS software.

    PubMed

    Hua, Nan; Tjong, Harianto; Shin, Hanjun; Gong, Ke; Zhou, Xianghong Jasmine; Alber, Frank

    2018-05-01

    Chromosome conformation capture technologies such as Hi-C are widely used to investigate the spatial organization of genomes. Because genome structures can vary considerably between individual cells of a population, interpreting ensemble-averaged Hi-C data can be challenging, in particular for long-range and interchromosomal interactions. We pioneered a probabilistic approach for the generation of a population of distinct diploid 3D genome structures consistent with all the chromatin-chromatin interaction probabilities from Hi-C experiments. Each structure in the population is a physical model of the genome in 3D. Analysis of these models yields new insights into the causes and the functional properties of the genome's organization in space and time. We provide a user-friendly software package, called PGS, which runs on local machines (for practice runs) and high-performance computing platforms. PGS takes a genome-wide Hi-C contact frequency matrix, along with information about genome segmentation, and produces an ensemble of 3D genome structures entirely consistent with the input. The software automatically generates an analysis report, and provides tools to extract and analyze the 3D coordinates of specific domains. Basic Linux command-line knowledge is sufficient for using this software. A typical running time of the pipeline is ∼3 d with 300 cores on a computer cluster to generate a population of 1,000 diploid genome structures at topological-associated domain (TAD)-level resolution.

  9. Malware Memory Analysis of the IVYL Linux Rootkit: Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2015-04-01

    ... report is to examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills ... The skills amassed by incident handlers and investigators alike while using Volatility to examine Windows memory images will be of some help ... [Fragment of a memory-image process listing from the report: .../bin/pulseaudio --start --log-target=syslog; 1362 nautilus; 1366 /usr/lib/pulseaudio/pulse/gconf-helper; 1370 nm-applet]

  10. 10 Gigabit Ethernet Performance on SGI Altix and Origin Systems

    NASA Technical Reports Server (NTRS)

    Meyer, Andy

    2005-01-01

    As the state of high performance computing continues to advance, the size of datasets continues to grow, driving a need for high-bandwidth data networks. 10 Gigabit Ethernet is the latest step in the popular Ethernet family of networks. We have evaluated the S2io Xframe 10 Gigabit Ethernet adapter on 512p SGI Altix systems running ProPack 3, and on Origin systems running Irix 6.5.24 and 6.5.26, in our production supercomputing environment. We encountered a number of performance and stability issues, which were promptly dealt with by SGI and S2io. Using nttcp we tested TCP performance for single and multiple streams, and we tested file transfer using NFS and bbftp. We will present the results of our testing, including the effects of various tuning options on throughput and CPU utilization, and offer suggestions for configuring and tuning S2io 10 Gigabit Ethernet cards in an Altix/Linux or Origin/Irix environment.

  11. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage, such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption) and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
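
    The incremental, compression-first approach described above can be sketched in a few lines. This is not the prototype's code: the manifest scheme, file names, and the upload stub are invented, and the encryption step is omitted for brevity.

        import gzip, hashlib, json, os

        MANIFEST = "manifest.json"   # hash of each file as last uploaded

        def upload_to_cloud(path, blob):
            # Placeholder for the real S3/Nebula upload (and encryption).
            print(f"uploading {path}: {len(blob)} bytes")

        def backup(paths):
            manifest = json.load(open(MANIFEST)) if os.path.exists(MANIFEST) else {}
            for path in paths:
                data = open(path, "rb").read()
                digest = hashlib.sha256(data).hexdigest()
                if manifest.get(path) == digest:
                    continue                  # unchanged: skip (incremental)
                blob = gzip.compress(data)    # bandwidth-efficient compression
                upload_to_cloud(path, blob)
                manifest[path] = digest
            json.dump(manifest, open(MANIFEST, "w"))

    Only files whose content hash has changed since the last run are compressed and sent, which is where the bandwidth and cost savings come from.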

  12. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Masuoka, C. M.; Frey, H. V.

    2004-01-01

    GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded on the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. Minimum system memory requirement is 32 MB, however loading large data sets may require larger amounts of RAM to function adequately.

  13. New Focal Plane Array Controller for the Instruments of the Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Nakaya, Hidehiko; Komiyama, Yutaka; Miyazaki, Satoshi; Yamashita, Takuya; Yagi, Masafumi; Sekiguchi, Maki

    2006-03-01

    We have developed a next-generation data acquisition system, MESSIA5 (Modularized Extensible System for Image Acquisition), which comprises the digital part of a focal plane array controller. The new data acquisition system was constructed based on a 64 bit, 66 MHz PCI (peripheral component interconnect) bus architecture and runs on an x86 CPU computer with (non-real-time) Linux. The system, including the CPU board, is placed at the telescope focus, and standard gigabit Ethernet is adopted for the data transfer, as opposed to a dedicated fiber link. During the summer of 2002, we installed the new system for the first time on the Subaru prime-focus camera Suprime-Cam and successfully improved the observing performance.

  14. How do I resolve problems reading the binary data?

    Atmospheric Science Data Center

    2014-12-08

    ... affecting compilation would be differing versions of the operating system and compilers the read software is being run on. Big ... Unix machines are Big Endian architecture while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
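
    Byte order is the usual culprit in such problems. A reader can make the expected endianness explicit instead of inheriting the machine's; the sketch below (a generic Python illustration, not the data center's read software) decodes a 4-byte big-endian float correctly on any host.

        import struct

        raw = b"\x41\x20\x00\x00"            # 4 bytes written by a big-endian machine

        big = struct.unpack(">f", raw)[0]    # interpret as big-endian -> 10.0
        little = struct.unpack("<f", raw)[0] # misread as little-endian -> garbage
        print(big, little)

        # Whole arrays can likewise be read with an explicit byte order:
        import numpy as np
        arr = np.frombuffer(raw, dtype=">f4").astype("<f4")   # convert on the fly
        print(arr)                           # [10.]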

  15. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  16. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.

  17. Level 1 Processing of MODIS Direct Broadcast Data From Terra

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Smith, Peter; Shotland, Larry; El-Ghazawi, Tarek; Zhu, Ming

    2000-01-01

    In February 2000, an effort was begun to adapt the Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1 production software to process direct broadcast data. Three Level 1 algorithms have been adapted and packaged for release: Level 1A converts raw (level 0) data into Hierarchical Data Format (HDF), unpacking packets into scans; Geolocation computes geographic information for the data points in the Level 1A; and the Level 1B computes geolocated, calibrated radiances from the Level 1A and Geolocation products. One useful aspect of adapting the production software is the ability to incorporate enhancements contributed by the MODIS Science Team. We have therefore tried to limit changes to the software. However, in order to process the data immediately on receipt, we have taken advantage of a branch in the geolocation software that reads orbit and altitude information from the packets themselves, rather than external ancillary files used in standard production. We have also verified that the algorithms can be run with smaller time increments (2.5 minutes) than the five-minute increments used in production. To make the code easier to build and run, we have simplified directories and build scripts. Also, dependencies on a commercial numerics library have been replaced by public domain software. A version of the adapted code has been released for Silicon Graphics machines running Irix. Perhaps owing to its origin in production, the software is rather CPU-intensive. Consequently, a port to Linux is underway, followed by a version to run on PC clusters, with an eventual goal of running in near-real-time (i.e., processing a ten-minute pass in ten minutes).

  18. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and the respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort. Program summary Program title: Grid[Way] Job Template Manager (version 1.0) Catalogue identifier: AEIE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Apache license 2.0 No. of lines in distributed program, including test data, etc.: 3545 No. of bytes in distributed program, including test data, etc.: 126 879 Distribution format: tar.gz Programming language: Perl 5.8.5 and above Computer: Any (tested on PC x86 and x86_64) Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6) RAM: 10 MB Classification: 6.5 External routines: The GridWay Metascheduler [1]. Nature of problem: To parameterize and manage an application running on a grid or cluster. Solution method: Generation of job templates as a cross product of the input parameter sets, together with management of the job template files, including job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wildcarding of parameters cannot be done in decreasing order. Job submission, control and information are delegated to the GridWay Metascheduler. Running time: From half a second for the simplest operation to a few minutes for thousands of exponential sampling parameters.
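
    The core of such a tool, generating one job template per point of the cross product of the parameter sets, fits in a few lines. The sketch below uses Python rather than the package's Perl, and the template field names are illustrative rather than exact GridWay syntax.

        import itertools

        parameters = {
            "BETA": [0.1, 0.2, 0.4],
            "SEED": range(1, 4),
        }

        names = list(parameters)
        # The sweep is the cross product of all parameter value sets.
        for index, values in enumerate(itertools.product(*parameters.values())):
            args = " ".join(f"--{n.lower()} {v}" for n, v in zip(names, values))
            template = (f"EXECUTABLE = simulate\n"
                        f"ARGUMENTS  = {args}\n"
                        f"STDOUT_FILE = run_{index}.out\n")
            with open(f"job_{index}.jt", "w") as f:
                f.write(template)
        print("wrote", index + 1, "job templates")   # 3 x 3 = 9 templates

    Automatic indexation of the generated templates, as in the file names above, is what allows the systematic bookkeeping of job statuses described in the abstract.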

  19. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  20. Linux Adventures on a Laptop. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  1. Chicks in Charge: Andrea Baker & Amy Daniels--Airport High School Media Center, Columbia, SC

    ERIC Educational Resources Information Center

    Library Journal, 2004

    2004-01-01

    This article briefly discusses two librarians' exploration of Linux. Andrea Baker and Amy Daniels were tired of telling their students that new technology items were not in the budget. They explored Linux, a free operating system that, together with other free software, can be installed on recycled older computers.

  2. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, in the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method (a minimal sketch is given below). The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 111327 No. of bytes in distributed program, including test data, etc.: 608411 Distribution format: tar.gz Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: source code 4.5 MB; complete package 242 MB Classification: 14, 16.9. External routines: OpenGL, OpenCL Nature of problem: Integrate N-body simulations, mass-spring models Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: Problem dependent
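
    The Euler method mentioned above as the missing link is indeed only a few lines. This sketch is not taken from MPPhys (which is C++/OpenCL); it integrates the mass-spring oscillator cited in the abstract in plain Python.

        # Explicit Euler integration of a 1-D mass-spring oscillator,
        # m*x'' = -k*x, the simplest entry point to many-particle simulation.
        m, k = 1.0, 4.0          # mass and spring constant
        x, v = 1.0, 0.0          # initial displacement and velocity
        dt, steps = 0.001, 5000  # step size and number of steps (t = 5)

        for _ in range(steps):
            a = -k / m * x       # acceleration from Hooke's law
            x += v * dt          # Euler update: position from current velocity
            v += a * dt          # ... and velocity from current acceleration
        print(x, v)              # compare with the exact x(t) = cos(2t) at t = 5

    The same update loop, applied to one particle per array element with pairwise forces, is exactly the step from this toy to an N-body simulation.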

  3. 75 FR 47609 - U.S. Customs and Border Protection; Notice of Issuance of Final Determination Concerning a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... Denver, Colorado. Communication Manager is designed to run on a variety of Linux-based media servers.... Some servers are in the form of blades. These are cards (similar to printed circuit cards with...

  4. Unix survival guide.

    PubMed

    Stein, Lincoln D

    2007-01-01

    For a mixture of historical and practical reasons, much of the bioinformatics software discussed in this series runs on Linux, Mac OSX, Solaris, or one of the many other Unix variants. This appendix provides the novice with easy-to-understand information needed to survive in the Unix environment.

  5. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  6. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
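
    The deduction step can be illustrated with a toy calculation (hypothetical numbers, and Python rather than the article's C): a noisy buffer-swap timestamp is snapped to the nearest vertical-retrace boundary implied by the known refresh period.

        # Snap a noisy buffer-swap timestamp to the retrace grid (illustrative).
        refresh_hz = 100.0
        period_ms = 1000.0 / refresh_hz            # 10 ms per frame at 100 Hz

        t0 = 12.3                                  # first retrace timestamp (ms)
        measured = 52.9                            # noisy later swap time (ms)

        frames = round((measured - t0) / period_ms)
        onset = t0 + frames * period_ms            # deduced presentation time
        print(onset)                               # -> 52.3 ms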

  7. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

    To meet the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of the testing work, the goal of this paper is to transmit image data from a CMOS camera through SpaceWire and display real-time images in a Qt graphical user interface on an embedded Linux & ARM development platform. A point-to-point mode of transmission was chosen; test runs showed that the two communication ends consistently received matching pictures in succession, suggesting that SpaceWire can transmit the data reliably.

  8. General Mission Analysis Tool (GMAT) Architectural Specification. Draft

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Conway, Darrel J.

    2007-01-01

    Early in 2002, Goddard Space Flight Center (GSFC) began to identify requirements for the flight dynamics software needed to fly upcoming missions that use formations of spacecraft to collect data. These requirements ranged from low level modeling features to large scale interoperability requirements. In 2003 we began work on a system designed to meet these requirements; this system is GMAT. The General Mission Analysis Tool (GMAT) is a general purpose flight dynamics modeling tool built on open source principles. The GMAT code is written in C++, and uses modern C++ constructs extensively. GMAT can be run either through a fully functional Graphical User Interface (GUI) or as a command line program with minimal user feedback. The system is built and runs on Microsoft Windows, Linux, and Macintosh OS X platforms. The GMAT GUI is written using wxWidgets, a cross platform library of components that streamlines the development and extension of the user interface. Flight dynamics modeling is performed in GMAT by building components that represent the players in the analysis problem that is being modeled. These components interact through the sequential execution of instructions, embodied in the GMAT Mission Sequence. A typical Mission Sequence will model the trajectories of a set of spacecraft evolving over time, calculating relevant parameters during this propagation, and maneuvering individual spacecraft to maintain a set of mission constraints as established by the mission analyst. All of the elements used in GMAT for mission analysis can be viewed in the GMAT GUI or through a custom scripting language. Analysis problems modeled in GMAT are saved as script files, and these files can be read into GMAT. When a script is read into the GMAT GUI, the corresponding user interface elements are constructed in the GMAT GUI. The GMAT system was developed from the ground up to run in a platform agnostic environment. The source code compiles on numerous different platforms, and is regularly exercised running on Windows, Linux and Macintosh computers by the development and analysis teams working on the project. The system can be run using either a graphical user interface, written using the open source wxWidgets framework, or from a text console. The GMAT source code was written using open source tools. GSFC has released the code using the NASA open source license.

  9. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summary: Program title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable. References: [1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. [2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  10. Molecular t-matrices for Low-Energy Electron Diffraction (TMOL v1.1)

    NASA Astrophysics Data System (ADS)

    Blanco-Rey, Maria; de Andres, Pedro; Held, Georg; King, David A.

    2004-08-01

    We describe a FORTRAN-90 program that computes scattering t-matrices for a molecule. These can be used in a Low-Energy Electron Diffraction program to solve the molecular structural problem very efficiently. The intramolecular multiple scattering is computed within a Dyson-like approach, using free-space Green propagators in a basis of spherical waves. The advantage of this approach lies in exploiting the chemical identity of the molecule, and in the ease of translating and rotating these t-matrices without performing a new multiple-scattering calculation for each configuration. FORTRAN-90 routines for rotating the resulting t-matrices using Wigner matrices are also provided. Program summary: Title of program: TMOL Catalogue number: ADUF Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Alpha ev6-21264 (700 MHz) and Pentium-IV. Operating systems: Digital UNIX V5.0 and Linux (Red Hat 8.0). Programming language: FORTRAN-90/95 (Compaq True64 compiler, and Intel Fortran Compiler 7.0 for Linux). High-speed storage required for the test run: minimum 64 Mbytes; it can grow depending on the system considered. Disk storage required: None. No. of bits in a word: 64 and 32. No. of lines in distributed program, including test data etc.: 5404 No. of bytes in distributed program, including test data etc.: 59 856 Distribution format: tar.gz Nature of problem: We describe the FORTRAN-90 program TMOL (v1.1) for the computation of non-diagonal scattering t-matrices for molecules or any other poly-atomic sub-unit of surface structures. These matrices can be used in a standard Low-Energy Electron Diffraction program, such as LEED90 or CLEED. Method of solution: A general non-diagonal t-matrix is assumed for the atoms or more general scatterers forming the molecule. The molecular t-matrix is solved by adding the possible intramolecular multiple-scattering events using Green's propagator formalism. The resulting t-matrix is referred to the mass centre of the molecule and can be easily translated with these propagators and rotated by applying Wigner matrices. Typical running time: Calculating the t-matrix for a single energy takes a few seconds. Time depends on the maximum angular momentum quantum number, lmax, and the number of scatterers in the molecule, N. Running time scales as lmax^6 and N^3. References: [1] S. Andersson, J.B. Pendry, J. Phys. C: Solid St. Phys. 13 (1980) 3547. [2] A. Gonis, W.H. Butler, Multiple Scattering in Solids, Springer-Verlag, Berlin/New York, 2000.

  11. Teaching Hands-On Linux Host Computer Security

    ERIC Educational Resources Information Center

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  12. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  13. TetrUSS Capabilities for S and C Applications

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Parikh, Paresh

    2004-01-01

    TetrUSS is a suite of loosely coupled computational fluid dynamics software that is packaged into a complete flow analysis system. The system components consist of tools for geometry setup, grid generation, flow solution, visualization, and various utility tasks. Development began in 1990, and it has evolved into a proven and stable system for Euler and Navier-Stokes analysis and design of unconventional configurations. It 1) is well developed and validated, 2) has a broad base of support, and 3) is presently a workhorse code because of the level of confidence that has been established through wide use. The entire system can now run on Linux or Mac architectures. In the following slides, I will highlight more of the features of the VGRID and USM3D codes.

  14. PANGEA: pipeline for analysis of next generation amplicons

    PubMed Central

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz FW; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-01-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including preprocessing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the ‘backbone’. PMID:20182525

  15. PANGEA: pipeline for analysis of next generation amplicons.

    PubMed

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz F W; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-07-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including pre-processing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the chi(2) step, are joined into one program called the 'backbone'.

  16. Space Communications Emulation Facility

    NASA Technical Reports Server (NTRS)

    Hill, Chante A.

    2004-01-01

    Establishing space communication between ground facilities and other satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team here at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility comprises a 32-node distributed cluster of computers, each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, called the Emulation Manager, controls the interaction of the satellites and ground stations, as well as the recording of data. The Emulation Manager is installed on a Linux operating system and employs both Java and C++ code. The emulation scenarios are written in eXtensible Markup Language (XML). XML documents are designed to store, carry, and exchange data; with XML, data can be exchanged between incompatible systems, which makes it ideal for this project because Linux, Mac, and Windows operating systems are all used. XML documents cannot, however, display data the way HTML documents can, so the SCEF team uses an XML Schema Definition (XSD), or simply a schema, to describe the structure of an XML document. Schemas are important because they can validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This is done on the command line under Linux. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run (a sketch of such a validation step appears below). Using sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by executing an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code is being modified to better fit the project's requirements: some emulation parameters are currently hard-coded at fixed values, and members of the SCEF team are altering the code to let the user choose these values through a toolbar added to the preexisting GUI.
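
    A minimal sketch of such a schema-validation step is shown below; the file names are hypothetical, and lxml merely stands in for whatever validator the Emulation Manager actually uses.

        # Validate an XML scenario file against an XSD schema (illustrative).
        from lxml import etree

        schema = etree.XMLSchema(etree.parse("scenario.xsd"))  # hypothetical schema
        doc = etree.parse("scenario.xml")                      # hypothetical scenario

        if schema.validate(doc):
            print("scenario is valid; safe to run")
        else:
            print(schema.error_log)                            # report what failed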

  17. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  18. Raspberry Pi in-situ network monitoring system of groundwater flow and temperature integrated with OpenGeoSys

    NASA Astrophysics Data System (ADS)

    Park, Chan-Hee; Lee, Cholwoo

    2016-04-01

    The Raspberry Pi series consists of low-cost, smaller-than-credit-card-sized computers to which various operating systems, such as Linux and recently even Windows 10, have been ported. Thanks to mass production and rapid technology development, the price of the various sensors that can be attached to a Raspberry Pi has been dropping at an increasing speed. The device is therefore an economical choice as a small portable computer for monitoring temporal hydrogeological data in the field. In this study, we present a Raspberry Pi system that measures the flow rate and temperature of groundwater at sites, stores the readings in a MySQL database, and produces interactive figures and tables, using Google Charts online or Bokeh offline, for further monitoring and analysis (the logging step is sketched below). Since all the data can be monitored over the Internet, any computer or mobile device becomes a convenient monitoring tool. The measured data are further integrated with OpenGeoSys, a hydrogeological model that has also been ported to the Raspberry Pi series. This enables on-site hydrogeological modeling fed by temporal sensor data to meet various needs.
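
    A hedged sketch of the logging loop follows; read_flow_and_temp() is a hypothetical stand-in for the real sensor drivers, the table layout is assumed, and PyMySQL is just one common way to reach MySQL from Python on a Pi.

        # Periodically log (hypothetical) sensor readings into MySQL.
        import time
        import pymysql

        def read_flow_and_temp():
            # placeholder for the actual flow/temperature sensor readout
            return 0.42, 11.7      # flow rate (L/min), temperature (deg C)

        conn = pymysql.connect(host="localhost", user="pi",
                               password="secret", database="groundwater")
        with conn.cursor() as cur:
            for _ in range(3):     # in practice this loop would run indefinitely
                flow, temp = read_flow_and_temp()
                cur.execute(
                    "INSERT INTO readings (ts, flow, temp) VALUES (NOW(), %s, %s)",
                    (flow, temp))
                conn.commit()
                time.sleep(60)     # one sample per minute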

  19. Simulation of two dimensional electrophoresis and tandem mass spectrometry for teaching proteomics.

    PubMed

    Fisher, Amanda; Sekera, Emily; Payne, Jill; Craig, Paul

    2012-01-01

    In proteomics, complex mixtures of proteins are separated (usually by chromatography or electrophoresis) and identified by mass spectrometry. We have created 2DE Tandem MS, a computer program designed for use in the biochemistry, proteomics, or bioinformatics classroom. It contains two simulations: 2D electrophoresis and tandem mass spectrometry. The two simulations are integrated and are designed to teach the concept of proteome analysis of prokaryotic and eukaryotic organisms. 2DE Tandem MS can be used as a freestanding simulation, or in conjunction with a wet lab, to introduce proteomics in the undergraduate classroom. 2DE Tandem MS is a free program available on Sourceforge at https://sourceforge.net/projects/jbf/. It was developed using Java Swing and functions in Mac OSX, Windows, and Linux, ensuring that every student sees a consistent and informative graphical user interface no matter which computer platform they choose. Java must be installed on the host computer to run 2DE Tandem MS. Example classroom exercises are provided in the Supporting Information. Copyright © 2012 Wiley Periodicals, Inc.

  20. A Configuration Framework and Implementation for the Least Privilege Separation Kernel

    DTIC Science & Technology

    2010-12-01

    The Altova Web site states that virtualization software, Parallels for Mac and Wine, is required for running it on MacOS and RedHat Linux...

  1. SpiceyPy, a Python Wrapper for SPICE

    NASA Astrophysics Data System (ADS)

    Annex, A.

    2017-06-01

    SpiceyPy is an open source Python wrapper for the NAIF SPICE toolkit. It is available for macOS, Linux, and Windows platforms and for Python versions 2.7.x and 3.x as well as Anaconda. SpiceyPy can be installed by running: “pip install spiceypy.”
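
    After installation, typical use follows the usual SPICE pattern of loading kernels and querying geometry. A short hedged example (the meta-kernel name is hypothetical, and the kernels it lists must be fetched from NAIF separately):

        # Minimal SpiceyPy session: load kernels, compute a state vector.
        import spiceypy as spice

        print(spice.tkvrsn("TOOLKIT"))             # underlying CSPICE version

        spice.furnsh("mission.tm")                 # hypothetical meta-kernel
        et = spice.str2et("2017 JUN 01 12:00:00")  # UTC string -> ephemeris time
        state, lt = spice.spkezr("MARS", et, "J2000", "LT+S", "EARTH")
        print(state[:3])                           # Mars position w.r.t. Earth (km)
        spice.kclear()                             # unload all kernels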

  2. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  3. Checkpointing Shared Memory Programs at the Application-level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Schulz, M; Szwed, P

    2004-09-08

    Trends in high-performance computing are making it necessary for long-running applications to tolerate hardware faults. The most commonly used approach is checkpoint and restart (CPR): the state of the computation is saved periodically on disk, and when a failure occurs, the computation is restarted from the last saved state. At present, it is the responsibility of the programmer to instrument applications for CPR. Our group is investigating the use of compiler technology to instrument codes to make them self-checkpointing and self-restarting, thereby providing an automatic solution to the problem of making long-running scientific applications resilient to hardware faults. Our previous work focused on message-passing programs. In this paper, we describe such a system for shared-memory programs running on symmetric multiprocessors. The system has two components: (i) a pre-compiler for source-to-source modification of applications, and (ii) a runtime system that implements a protocol for coordinating CPR among the threads of the parallel application. For the sake of concreteness, we focus on a non-trivial subset of OpenMP that includes barriers and locks. One of the advantages of this approach is that the ability to tolerate faults becomes embedded within the application itself, so applications become self-checkpointing and self-restarting on any platform. We demonstrate this by showing that our transformed benchmarks can checkpoint and restart on three different platforms (Windows/x86, Linux/x86, and Tru64/Alpha). Our experiments show that the overhead introduced by this approach is usually quite small; they also suggest ways in which the current implementation can be tuned to reduce overheads further.
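
    The CPR idea itself can be conveyed with a toy sketch (a conceptual illustration only, not the compiler-instrumented system described above): state is saved periodically, and on restart the loop resumes from the last checkpoint.

        # Toy application-level checkpoint/restart (illustrative only).
        import os
        import pickle

        CKPT = "state.ckpt"

        # resume from the last checkpoint if one exists
        if os.path.exists(CKPT):
            with open(CKPT, "rb") as f:
                start, total = pickle.load(f)
        else:
            start, total = 0, 0.0

        for i in range(start, 1_000_000):
            total += i * 1e-6                  # stand-in for real computation
            if i % 100_000 == 0:               # periodic checkpoint to disk
                with open(CKPT, "wb") as f:
                    pickle.dump((i + 1, total), f)
        print(total)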

  4. CHROMA: consensus-based colouring of multiple alignments for publication.

    PubMed

    Goodstadt, L; Ponting, C P

    2001-09-01

    CHROMA annotates multiple protein sequence alignments by consensus to produce formatted and coloured text suitable for incorporation into other documents for publication. The package is designed to be flexible and reliable, and has a simple-to-use graphical user interface running under Microsoft Windows. Both the executables and source code for CHROMA running under Windows and Linux (portable command-line only) are freely available at http://www.lg.ndirect.co.uk/chroma. Software enquiries should be directed to CHROMA@lg.ndirect.co.uk.

  5. Web Service Model for Plasma Simulations with Automatic Post Processing and Generation of Visual Diagnostics*

    NASA Astrophysics Data System (ADS)

    Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.

    2003-10-01

    We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.

  6. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.

  7. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. To carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following packages: CATIA v5, a three-dimensional CAD system, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for the prediction of fluid-dynamic forces. The process integration makes it possible to compute, for each geometric configuration, a set of aerodynamic coefficients that are then used in the multibody simulation to compute the lap time. Finally, an automatic optimization procedure is started and the lap time minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.

  8. Real-time autocorrelator for fluorescence correlation spectroscopy based on graphical-processor-unit architecture: method, implementation, and comparative studies

    NASA Astrophysics Data System (ADS)

    Laracuente, Nicholas; Grossman, Carl

    2013-03-01

    We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications are, because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware-correlator measurements on the same signals (a simplified software estimator is sketched below). Supported by HHMI and Swarthmore College.
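
    What a software correlator computes can be shown with a direct NumPy estimator of the intensity autocorrelation g(tau). This is a simplified illustration only; the paper's GPU code uses a multi-tau scheme rather than this direct sum.

        # Direct estimator of g(tau) for simulated photon counts (illustrative).
        import numpy as np

        rng = np.random.default_rng(0)
        I = rng.poisson(5.0, size=100_000).astype(float)   # fake count trace

        def autocorr(I, max_lag):
            norm = I.mean() ** 2
            g = [(I * I).mean() / norm]                    # lag 0
            g += [(I[:-k] * I[k:]).mean() / norm for k in range(1, max_lag)]
            return np.array(g)

        g = autocorr(I, 50)
        print(g[:5])   # ~1.2 at lag 0 (Poisson shot noise), ~1.0 at lag > 0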

  9. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tock, Yoav; Mandler, Benjamin; Moreira, Jose

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation's lifetime. The core of our solution is a hierarchical, scalable membership service providing eventual-consistency semantics. An attribute replication service is used for hierarchy organization and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra-large scales. The resulting middleware is general purpose while exploiting HPC platform-specific features and architecture. We have implemented and tested this system on BlueGene/P with Linux and, using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  10. Multichannel Networked Phasemeter Readout and Analysis

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    Netmeter software reads a data stream from up to 250 networked phasemeters, synchronizes the data, saves the reduced data to disk (after applying a low-pass filter), and provides a Web server interface for remote control. Unlike older phasemeter software that requires a special, real-time operating system, this program can run on any general-purpose computer. It needs about five percent of the CPU (central processing unit) to process 20 channels because it adds built-in data logging and network-based GUIs (graphical user interfaces) that are implemented in Scalable Vector Graphics (SVG). Netmeter runs on Linux and Windows. It displays the instantaneous displacements measured by several phasemeters at a user-selectable rate, up to 1 kHz. The program monitors the measure and reference channel frequencies. For ease of use, levels of status in Netmeter are color coded: green for normal operation, yellow for network errors, and red for optical misalignment problems. Netmeter includes user-selectable filters up to 4 k samples, and user-selectable averaging windows (after filtering). Before filtering, the program saves raw data to disk using a burst-write technique.

  11. Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy

    PubMed Central

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-01-01

    Cloud computing has revolutionized availability and access to computing and storage resources; making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to setup the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313

  12. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.

  13. a Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster built with off-the-shelf components. At present the system consists of 64 nodes, each with one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.

  14. A blueprint for computational analysis of acoustical scattering from orchestral panel arrays

    NASA Astrophysics Data System (ADS)

    Burns, Thomas

    2005-09-01

    Orchestral panel arrays have been a topic of interest to acousticians, and it is reasonable to expect optimal design criteria to result from a combination of musician surveys, on-stage empirical data, and computational modeling of various configurations. Preparing a musicians survey to identify specific mechanisms of perception and sound quality is best suited for a clinically experienced hearing scientist. Measuring acoustical scattering from a panel array and discerning the effects from various boundaries is best suited for the experienced researcher in engineering acoustics. Analyzing a numerical model of the panel arrays is best suited for the tools typically used in computational engineering analysis. Toward this end, a streamlined process will be described using PROENGINEER to define a panel array geometry in 3-D, a commercial mesher to numerically discretize this geometry, SYSNOISE to solve the associated boundary element integral equations, and MATLAB to visualize the results. The model was run (background priority) on an SGI Altix (Linux) server with 12 CPUs, 24 Gbytes of RAM, and 1 Tbyte of disk space. These computational resources are available to research teams interested in this topic and willing to write and pursue grants.

  15. Computing with Beowulf

    NASA Technical Reports Server (NTRS)

    Cohen, Jarrett

    1999-01-01

    Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center. A laboratory for the Earth and space sciences, computing managers there threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the University Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.

  16. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  17. Fast 2D FWI on a multi and many-cores workstation.

    NASA Astrophysics Data System (ADS)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard two-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is the ability to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils, or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each supporting up to 4 threads, this many-core device can be seen as a shared-memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to obtain a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We chose to implement the code in the so-called symmetric mode to use the full capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is applied to ensure optimal performance and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model-parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers (sketched below) and OpenMP for domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory and compute the gradient by cross-correlation of the forward- and back-propagated wavefields, as needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we will also review some simple methodologies for comparing expected with measured performance, in order to estimate the optimization effort before starting any major modification or rewriting of research codes. The key message is the ease of use and development of this hybrid configuration, which reaches not the absolute peak performance but the optimal one that ensures the best balance between geophysical and computer developments.
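
    The shot-level parallelism mentioned above is the classic embarrassingly parallel pattern. A hedged sketch (mpi4py used for illustration rather than the authors' Fortran; process_shot() is a hypothetical stand-in for modeling one shot gather):

        # Round-robin distribution of shots over MPI ranks (illustrative).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        n_shots = 96

        def process_shot(s):
            return float(s)      # placeholder for modeling + gradient of shot s

        local = sum(process_shot(s) for s in range(rank, n_shots, size))
        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("sum of per-shot contributions:", total)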

  18. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each processor involved is a multicore processor with four cores, so the cluster has eight cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers can pass the required information without any problem, using a simple MPI "Hello" program written in C. The performance test was done to show that the cluster's computing performance is much better than that of a single-CPU computer. In this test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that the time required to solve the problem decreases with additional processors, roughly halving each time the number of processors is doubled (a Python analogue of both tests is sketched below). To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU machine; this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
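
    A Python analogue of both tests is sketched here (the original tests were written in C; mpi4py is used for brevity). Run with, e.g., mpiexec -n 8 python cluster_test.py:

        # Communication test ("hello") plus a crude strong-scaling timing.
        import time
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        print(f"hello from rank {rank} of {size}")     # communication test

        comm.Barrier()
        t0 = time.perf_counter()
        # performance test: split a fixed workload across the ranks
        local = sum(i * i for i in range(rank, 10_000_000, size))
        total = comm.reduce(local, op=MPI.SUM, root=0)
        comm.Barrier()
        if rank == 0:
            print(f"result {total}, wall time {time.perf_counter() - t0:.2f} s")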

  19. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

    packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many

  20. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  1. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, no user-friendly system has been available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, running under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models to analyze the data, and present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, such as cancer cells.

  2. A Web-Server of Cell Type Discrimination System

    PubMed Central

    Zhong, Yan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, no user-friendly system has been available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, running under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models to analyze the data, and present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, such as cancer cells. PMID:24578634

  3. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary. Program title: FLY 3.1 Catalogue identifier: ADSC_v2_0 Licensing provisions: yes Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland No. of lines in distributed program, including test data, etc.: 158 172 No. of bytes in distributed program, including test data, etc.: 4 719 953 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Beowulf cluster, PC, MPP systems Operating system: Linux, Aix RAM: 100M words Catalogue identifier of previous version: ADSC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159 Does the new version supersede the previous version?: yes Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard: the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. The performance of FLY now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is an interface to hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication scheme was completely changed. The new version adopts the MPICH2 library, so FLY can now be executed on any Unix system with an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI Window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure and grouping cells). Other windows are created for dynamic load balancing and global counters. Restrictions: The program uses the leapfrog integration scheme, but this can be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library was adopted on Linux systems. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca, with 512 nodes of 2 processors each and 2 GB RAM per processor, was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card, "C" and "D" versions. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 using basic optimization options, so that the measured performance can be usefully compared with that of other generic clusters.
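
    The one-sided communication pattern quoted above can be illustrated in Python with mpi4py; this is an illustration only, not FLY's Fortran 90 implementation, and the array names are invented. Each rank exposes its particle array through an MPI window (the analogue of the MPI_WIN_CREATE call in the summary), after which any rank can read remote data without the target's participation:

      # Minimal sketch of MPI-2 one-sided communication (illustrative, not FLY code).
      # Requires mpi4py and an MPI runtime; run e.g. with: mpiexec -n 2 python sketch.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank exposes its slice of "particle positions" through an MPI window,
      # much as FLY exposes POS via MPI_WIN_CREATE.
      pos = np.full(8, float(rank), dtype='d')
      win = MPI.Win.Create(pos, disp_unit=pos.itemsize, comm=comm)

      # One-sided read of a neighbour's data: no matching call on the target side.
      buf = np.empty(8, dtype='d')
      target = (rank + 1) % comm.Get_size()
      win.Lock(target, MPI.LOCK_SHARED)
      win.Get(buf, target)
      win.Unlock(target)
      win.Free()
      print(rank, 'read from', target, buf[:3])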

  4. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, and ease-of-use, and to provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.

  5. Precision studies of the NNLO DGLAP evolution at the LHC with Candia

    NASA Astrophysics Data System (ADS)

    Cafarella, Alessandro; Corianò, Claudio; Guzzi, Marco

    2008-11-01

    We summarize the theoretical approach to the solution of the NNLO DGLAP equations using methods based on the logarithmic expansions in x-space and their implementation into the C program CANDIA 1.0. We present the various options implemented in the program and discuss the different solutions. The user can choose the order of the evolution, the type of the solution, which can be either exact or truncated, and the evolution either with a fixed or a varying flavor number, implemented in the varying-flavor-number scheme (VFNS). The renormalization and factorization scale dependencies are treated separately. In the non-singlet sector the program implements an exact NNLO solution. Program summary. Program title: CANDIA Catalogue identifier: AEBK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 101 376 No. of bytes in distributed program, including test data, etc.: 5 865 234 Distribution format: tar.gz Programming language: C and Fortran Computer: All Operating system: Linux RAM: In the given examples, it ranges from 4 to 490 MB Classification: 11.1, 11.5 Nature of problem: The program provided here solves the DGLAP evolution equations for the parton distribution functions up to NNLO. Solution method: The algorithm implemented is based on the theory of the logarithmic expansions in Bjorken x-space. Additional comments: To be sure of getting the latest version of the program, the authors suggest downloading the code from the official CANDIA website (http://www.le.infn.it/candia). Running time: In the given examples, it ranges from 1 to 40 minutes. The jobs were executed on an Intel Core 2 Duo T7250 CPU at 2 GHz with a 64 bit Linux kernel. The test run script included in the package contains 5 sample runs and may take a number of hours to process, depending on the speed of the processor used and the size of the available RAM.

  6. Setting Up the JBrowse Genome Browser

    PubMed Central

    Skinner, Mitchell E; Holmes, Ian H

    2010-01-01

    JBrowse is a web-based tool for visualizing genomic data. Unlike most other web-based genome browsers, JBrowse exploits the capabilities of the user's web browser to make scrolling and zooming fast and smooth. It supports the browsers used by almost all internet users, and is relatively simple to install. JBrowse can utilize multiple types of data in a variety of common genomic data formats, including genomic feature data in bioperl databases, GFF files, and BED files, and quantitative data in wiggle files. This unit describes how to obtain the JBrowse software, set it up on a Linux or Mac OS X computer running as a web server and incorporate genome annotation data from multiple sources into JBrowse. After completing the protocols described in this unit, the reader will have a web site that other users can visit to browse the genomic data. PMID:21154710

  7. Industrial applications of high-performance computing for phylogeny reconstruction

    NASA Astrophysics Data System (ADS)

    Bader, David A.; Moret, Bernard M.; Vawter, Lisa

    2001-07-01

    Phylogenies (that is, tree-of-life relationships) derived from gene order data may prove crucial in answering some fundamental open questions in biomolecular evolution. There is strong real-world interest in determining these relationships. For example, pharmaceutical companies may use phylogeny reconstruction in drug discovery to identify synthetic pathways unique to organisms that they wish to target. Health organizations study the phylogenies of organisms such as HIV in order to understand their epidemiologies and to aid in predicting the behaviors of future outbreaks. And governments are interested in improving the production of foodstuffs such as rice, wheat and potatoes by understanding the phylogenetic distribution of genetic variation in wild populations. Yet few techniques are available for difficult phylogenetic reconstruction problems. Appropriate tools for analysis of such data may aid in resolving some of the phylogenetic problems that have been analyzed without much resolution for decades. With the rapid accumulation of whole genome sequences for a wide diversity of taxa, especially microbial taxa, phylogenetic reconstruction based on changes in gene order and gene content is showing promise, particularly for resolving deep (i.e., ancient) branch splits. However, reconstruction from gene-order data is even more computationally expensive than reconstruction from sequence data, particularly in groups with large numbers of genes and highly-rearranged genomes. We have developed a software suite, GRAPPA, that extends the breakpoint analysis (BPAnalysis) method of Sankoff and Blanchette while running much faster: in a recent analysis of chloroplast genome data for species of Campanulaceae on a 512-processor Linux supercluster with Myrinet, we achieved a one-million-fold speedup over BPAnalysis. GRAPPA can use either breakpoint or inversion distance (computed exactly) for its computation and runs on single-processor machines as well as parallel and high-performance computers.
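
    The breakpoint distance GRAPPA can use is simple to state: it counts gene adjacencies present in one genome but missing from the other. A toy Python sketch for unsigned, linear gene orders (illustrative only; GRAPPA itself handles signed genomes and exact inversion distance, which is far more involved):

      # Toy breakpoint distance between two unsigned, linear gene orders.
      # Illustrative only; not GRAPPA's implementation.
      def adjacencies(order):
          # Each neighbouring pair of genes forms an unordered adjacency.
          return {frozenset(p) for p in zip(order, order[1:])}

      def breakpoint_distance(a, b):
          # Adjacencies of `a` that are absent from `b` are breakpoints.
          return len(adjacencies(a) - adjacencies(b))

      g1 = [1, 2, 3, 4, 5]
      g2 = [1, 3, 2, 4, 5]
      print(breakpoint_distance(g1, g2))  # -> 2 (adjacencies 1-2 and 3-4 are broken)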

  8. MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science

    NASA Astrophysics Data System (ADS)

    Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke

    2011-12-01

    We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summary. Program title: MsSpec-1.0 Catalogue identifier: AEJT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 504 438 No. of bytes in distributed program, including test data, etc.: 14 448 180 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any Operating system: Linux, MacOS RAM: Bytes Classification: 7.2 External routines: Lapack (http://www.netlib.org/lapack/) Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.
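
    Of the three algorithms mentioned, the series expansion amounts to truncating the Neumann series for the inverse of the multiple-scattering matrix, trading accuracy for speed. A numeric sketch of that trade-off (illustrative only, not MsSpec code; a random contraction stands in for the physical scattering matrix):

      # Full inversion of (I - M) versus a truncated series I + M + M^2 + ... + M^k.
      # Illustrative only; M is a random contraction so the series converges.
      import numpy as np

      rng = np.random.default_rng(0)
      M = 0.1 * rng.standard_normal((50, 50))   # spectral radius well below 1
      I = np.eye(50)

      exact = np.linalg.inv(I - M)              # "full matrix inversion" option

      approx, term = I.copy(), I.copy()         # "series expansion" option
      for _ in range(8):
          term = term @ M
          approx += term

      print(np.max(np.abs(exact - approx)))     # truncation error is tiny here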

  9. HEP Computing

    Science.gov Websites

    HEP computing site links: mail-migration procedures on Linux and on Windows, and how to migrate a folder to GMail using Pine (U.S. Department of Energy).

  10. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or to benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
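
    The core of any importance-sampling likelihood estimator is an average of proposal draws reweighted by the density ratio p/q. A generic sketch of that core (an assumed form for illustration; the framework's actual Java proposals operate on genealogical histories, not scalars):

      # Generic importance sampling: E_p[f(X)] = E_q[f(X) p(X)/q(X)].
      # Illustrative only; not the Coalescent framework's Java API.
      import random, math

      def estimate(f, sample_q, weight, n=100_000):
          # weight(x) must return the density ratio p(x)/q(x) for the drawn x.
          total = 0.0
          for _ in range(n):
              x = sample_q()
              total += f(x) * weight(x)
          return total / n

      # Example: estimate E_p[x^2] = 2 for p = Exp(1) using proposal q = Exp(0.5).
      sample_q = lambda: random.expovariate(0.5)
      weight = lambda x: math.exp(-x) / (0.5 * math.exp(-0.5 * x))
      print(estimate(lambda x: x * x, sample_q, weight))  # close to 2.0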

  11. Web Services Provide Access to SCEC Scientific Research Application Software

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.

    2003-12-01

    Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.

  12. Fast and Sensitive Alignment of Microbial Whole Genome Sequencing Reads to Large Sequence Datasets on a Desktop PC: Application to Metagenomic Datasets and Pathogen Identification

    PubMed Central

    2014-01-01

    Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than the approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner. PMID:25077800

  13. SpecPad: device-independent NMR data visualization and processing based on the novel DART programming language and Html5 Web technology.

    PubMed

    Guigas, Bruno

    2017-09-01

    SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language DART, in combination with Html5 Web technology, forms a suitable basis for writing NMR data evaluation software which runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because DART's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of Html5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy to use graphical user interface, consistent across all hardware platforms, supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Fast and sensitive alignment of microbial whole genome sequencing reads to large sequence datasets on a desktop PC: application to metagenomic datasets and pathogen identification.

    PubMed

    Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs

    2014-01-01

    Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than the approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.

  15. Reduze - Feynman integral reduction in C++

    NASA Astrophysics Data System (ADS)

    Studerus, C.

    2010-07-01

    Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summary. Program title: Reduze Catalogue identifier: AEGE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 55 433 No. of bytes in distributed program, including test data, etc.: 554 866 Distribution format: tar.gz Programming language: C++ Computer: All Operating system: Unix/Linux Number of processors used: The number of processors is problem dependent; more than one is possible, but not arbitrarily many. RAM: Depends on the complexity of the system. Classification: 4.4, 5 External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/) Nature of problem: Solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: Using a Gauss/Laporta algorithm to solve the system of equations. Restrictions: Limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: Depends on the complexity of the system.
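
    At its algebraic core, a Laporta-style reduction is exact Gaussian elimination on a linear system whose unknowns are integrals. A scalar sketch with exact rational arithmetic (illustrative only; Reduze's prefactors are rational polynomials in kinematic invariants, not plain numbers):

      # Exact Gauss-Jordan elimination over rationals (illustrative, not Reduze code).
      from fractions import Fraction

      def solve(A, b):
          n = len(A)
          M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
          for i in range(n):
              # Pivot: pick a row with a nonzero entry in column i (assumes a solvable system).
              p = next(r for r in range(i, n) if M[r][i] != 0)
              M[i], M[p] = M[p], M[i]
              for r in range(n):
                  if r != i and M[r][i] != 0:
                      f = M[r][i] / M[i][i]
                      M[r] = [a - f * c for a, c in zip(M[r], M[i])]
          return [M[i][n] / M[i][i] for i in range(n)]

      print(solve([[2, 1], [1, 3]], [5, 10]))  # -> [Fraction(1, 1), Fraction(3, 1)]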

  16. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in a soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
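
    The "fill in pre-defined model routines" pattern the abstract describes can be sketched generically; all names below are hypothetical, and the real Biocellion API is a far richer C++ interface that also owns the parallelization:

      # Generic framework-with-callbacks pattern (hypothetical names, not Biocellion's API).
      class Framework:
          def __init__(self, update_cell):
              self.update_cell = update_cell  # user-supplied model routine

          def run(self, cells, steps):
              for _ in range(steps):
                  # The framework owns the loop (and, in the real system, its
                  # parallel execution); the user supplies only per-cell rules.
                  cells = [self.update_cell(c, cells) for c in cells]
              return cells

      # The "model specifics" a user would fill in: here, cells drift toward the mean.
      def my_model(cell, cells):
          mean = sum(cells) / len(cells)
          return cell + 0.1 * (mean - cell)

      print(Framework(my_model).run([0.0, 10.0], steps=50))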

  17. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    PubMed

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis.

  18. WImpiBLAST: Web Interface for mpiBLAST to Help Biologists Perform Large-Scale Annotation Using High Performance Computing

    PubMed Central

    Sharma, Parichit; Mantri, Shrikant S.

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410

  19. Line-by-line spectroscopic simulations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most processor resources available, and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it can be done with affordable additional resources compared to what is necessary to perform simulations of fluid dynamics alone. Program summary. Program title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C++ 2005 with Cygwin 1.5.24 under Windows XP. RAM: 1 gigabyte Classification: 21.2 External routines: OpenGL (http://www.opengl.org) Nature of problem: Simulating radiative transfer in high-temperature high-pressure gases. Solution method: Line-by-line Monte-Carlo ray-tracing. Unusual features: Parallel computations are moved to the GPU. Additional comments: An nVidia GeForce 7000 or ATI Radeon X1000 series graphics processing unit is required. Running time: A few minutes.

  20. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  1. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  2. 2DRMP: A suite of two-dimensional R-matrix propagation codes

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; Scott, M. P.; Burke, P. G.; Stitt, T.; Faro-Maza, V.; Denis, C.; Maniopoulou, A.

    2009-12-01

    The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years a series of related R-matrix program packages have been published periodically in CPC. These packages are primarily concerned with low-energy scattering where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies. Program summary. Program title: 2DRMP Catalogue identifier: AEEA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 196 717 No. of bytes in distributed program, including test data, etc.: 3 819 727 Distribution format: tar.gz Programming language: Fortran 95, MPI Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3] Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3] Has the code been vectorised or parallelised?: Yes. 16 cores were used for the small test run Classification: 2.4 External routines: BLAS, LAPACK, PBLAS, ScaLAPACK Subprograms used: ADAZ_v1_1 Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies. Solution method: Two-dimensional R-matrix propagation theory. The (r1,r2) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region. On the boundary of the internal region ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems, enabling the internal region to be extended far beyond what is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically and the R-matrix basis states are constructed to allow for both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available. Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions. Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4], which is an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files. Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s). References: [1] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009. [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009. [3] HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009. [4] Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.

  3. Remotely Accessible Testbed for Software Defined Radio Development

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Lang, Minh; Peters, Kenneth J.; Taylor, Gregory H.

    2012-01-01

    Previous development testbeds have assumed that the developer was physically present in front of the hardware being used. No provision for remote operation of basic functions (power on/off or reset) was made, because the developer/operator was sitting in front of the hardware, and could just push the button manually. In this innovation, a completely remotely accessible testbed has been created, with all diagnostic equipment and tools set up for remote access, and using standardized interfaces so that failed equipment can be quickly replaced. In this testbed, over 95% of the operating hours were used for testing without the developer being physically present. The testbed includes a pair of personal computers, one running Linux and one running Windows. A variety of peripherals is connected via Ethernet and USB (universal serial bus) interfaces. A private internal Ethernet is used to connect to test instruments and other devices, so that the sole connection to the outside world is via the two PCs. An important design consideration was that all of the instruments and interfaces used stable, long-lived industry standards, such as Ethernet, USB, and GPIB (general purpose interface bus). There are no plug-in cards for the two PCs, so there are no problems with finding replacement computers with matching interfaces, device drivers, and installation. The only thing unique to the two PCs is the locally developed software, which is not specific to computer or operating system version. If a device (including one of the computers) were to fail or become unavailable (e.g., a test instrument needed to be recalibrated), replacing it is a straightforward process with a standard, off-the-shelf device.

  4. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    The pressure ratio to thermal environments (PRatTlE.pl) program is a Perl-language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressures at the reference and requested points, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure ratio-based thermal environments in an automated and traceable manner. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both the HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
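
    The scaling itself is a one-line computation per body point. A sketch under the assumption that heating scales linearly with the pressure ratio, q_bp = q_ref * (P_bp / P_ref); note that some pressure-ratio methods scale with the square root of the ratio, so the exponent here is an assumption, and all names are illustrative:

      # Pressure-ratio scaling of heating environments (assumed linear-in-ratio form).
      # Illustrative only; not PRatTlE's Perl implementation.
      def scaled_heating(q_ref, p_ref, body_point_pressures):
          # body_point_pressures maps a body-point id to its local CFD pressure.
          return {bp: q_ref * (p / p_ref) for bp, p in body_point_pressures.items()}

      cfd_pressures = {'BP101': 12.0, 'BP102': 18.0}   # local pressures from CFD
      print(scaled_heating(q_ref=50.0, p_ref=15.0, body_point_pressures=cfd_pressures))
      # -> {'BP101': 40.0, 'BP102': 60.0} in the units of q_ref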

  5. xQTL workbench: a scalable web environment for multi-level QTL analysis.

    PubMed

    Arends, Danny; van der Velde, K Joeri; Prins, Pjotr; Broman, Karl W; Möller, Steffen; Jansen, Ritsert C; Swertz, Morris A

    2012-04-01

    xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily on to multi-core computers, clusters and Cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl.

  6. xQTL workbench: a scalable web environment for multi-level QTL analysis

    PubMed Central

    Arends, Danny; van der Velde, K. Joeri; Prins, Pjotr; Broman, Karl W.; Möller, Steffen; Jansen, Ritsert C.; Swertz, Morris A.

    2012-01-01

    Summary: xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily on to multi-core computers, clusters and Cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. Availability: xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl PMID:22308096

  7. Mapping RNA-seq Reads with STAR

    PubMed Central

    Dobin, Alexander; Gingeras, Thomas R.

    2015-01-01

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is Open Source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920

  8. Mapping RNA-seq Reads with STAR.

    PubMed

    Dobin, Alexander; Gingeras, Thomas R

    2015-09-03

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, and signal visualization. In this unit, we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux, or Mac OS X systems. Copyright © 2015 John Wiley & Sons, Inc.

  9. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI, running on Linux systems, as well as a reference manual are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A

    2017-10-01

    Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps has been assessed through several means: (a) generating accurate square wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard to assess an experiment's software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, SysRq activation) and runs on 2.6-4.8 Linux kernels.
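
    The measurement idea is to emit events at precisely known times and compare those ground-truth timestamps with the ones the experiment software records. A user-space Python sketch of that comparison (illustrative only; the real tool is a kernel module generating genuine keypress events with nanosecond resolution):

      # User-space sketch of the latency measurement behind Metronome LKM:
      # emit "events" at known times, then compare with the timestamps the
      # software under test logged. Illustrative; not the kernel module itself.
      import time

      def emit_events(n, interval_s):
          truth = []
          for _ in range(n):
              truth.append(time.perf_counter_ns())  # ground-truth emission time
              time.sleep(interval_s)
          return truth

      def latency_stats_ms(truth, recorded):
          # recorded: timestamps the experiment software logged for each event
          deltas = [(r - t) / 1e6 for t, r in zip(truth, recorded)]
          return min(deltas), sum(deltas) / len(deltas), max(deltas)

      truth = emit_events(10, 0.05)
      recorded = [t + 1_200_000 for t in truth]   # stand-in for logged timestamps
      print(latency_stats_ms(truth, recorded))    # ~1.2 ms across the board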

  11. permGPU: Using graphics processing units in RNA microarray association studies.

    PubMed

    Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros

    2010-06-16

    Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
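
    The permutation-resampling loop that permGPU accelerates is embarrassingly parallel: each permuted test statistic can be computed independently. A CPU sketch with NumPy for one of the simplest statistics, a two-group difference in means (illustrative; permGPU implements such loops in CUDA on the GPU):

      # Permutation resampling for a two-group difference-in-means statistic.
      # Illustrative CPU version of the kind of loop permGPU moves onto the GPU.
      import numpy as np

      rng = np.random.default_rng(1)
      expr = rng.standard_normal(60)           # one gene's expression, 60 samples
      labels = np.array([0] * 30 + [1] * 30)   # binary trait

      def stat(x, y):
          return x[y == 1].mean() - x[y == 0].mean()

      observed = stat(expr, labels)
      perms = np.array([stat(expr, rng.permutation(labels)) for _ in range(10_000)])
      pval = (np.sum(np.abs(perms) >= abs(observed)) + 1) / (perms.size + 1)
      print(observed, pval)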

  12. Software for Displaying Data from Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert

    2003-01-01

    Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.

  13. The orbifolder: A tool to study the low-energy effective theory of heterotic orbifolds

    NASA Astrophysics Data System (ADS)

    Nilles, H. P.; Ramos-Sánchez, S.; Vaudrevange, P. K. S.; Wingerter, A.

    2012-06-01

    The orbifolder is a program developed in C++ that computes and analyzes the low-energy effective theory of heterotic orbifold compactifications. The program includes routines to compute the massless spectrum, to identify the allowed couplings in the superpotential, to automatically generate large sets of orbifold models, to identify phenomenologically interesting models (e.g. MSSM-like models) and to analyze their vacuum configurations. Program summary. Program title: orbifolder Catalogue identifier: AELR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 145 572 No. of bytes in distributed program, including test data, etc.: 930 517 Distribution format: tar.gz Programming language: C++ Computer: Personal computer Operating system: Tested on Linux (Fedora 15, Ubuntu 11, SuSE 11) Word size: 32 bits or 64 bits Classification: 11.1 External routines: Boost (http://www.boost.org/), GSL (http://www.gnu.org/software/gsl/) Nature of problem: Calculating the low-energy spectrum of heterotic orbifold compactifications. Solution method: Quadratic equations on a lattice; representation theory; polynomial algebra. Running time: Less than a second per model.

  14. Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.

    PubMed

    Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M

    2015-11-01

    Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software package (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteered computing grids and heterogeneous high performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high performance heterogeneous hardware devices.
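
    The exchange step around which such software is built is a Metropolis test on inverse temperatures and energies; ASyncRE's contribution is the asynchronous scheduling around it. A sketch of the acceptance test alone (illustrative, not the ASyncRE API):

      # Metropolis acceptance for swapping temperatures between replicas i and j.
      # Illustrative sketch; beta * energy must be dimensionless.
      import math, random

      def accept_swap(beta_i, beta_j, energy_i, energy_j):
          # Detailed balance: accept with probability min(1, exp(delta)).
          delta = (beta_i - beta_j) * (energy_i - energy_j)
          return delta >= 0 or random.random() < math.exp(delta)

      # In an asynchronous scheme, any two currently idle replicas may attempt
      # a swap at any time, independently of the progress of the others.
      print(accept_swap(beta_i=1.00, beta_j=0.90, energy_i=-120.0, energy_j=-100.0))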

  15. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  16. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

    FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
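
    FileUtils itself is written in C, but the pattern it exploits, splitting one large copy across many processes, can be sketched with mpi4py. Directory names below are hypothetical, and the real tools also parallelize the directory walk itself.

        from mpi4py import MPI
        import os, shutil

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Rank 0 walks the source tree once, then deals the file list out
        # so every rank copies a disjoint share instead of one process
        # doing all the work.
        if rank == 0:
            files = [os.path.join(d, f)
                     for d, _, names in os.walk("src_dir") for f in names]
            chunks = [files[i::size] for i in range(size)]
        else:
            chunks = None
        for path in comm.scatter(chunks, root=0):
            dest = os.path.join("dst_dir", os.path.relpath(path, "src_dir"))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)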

  17. Validation of CFD/Heat Transfer Software for Turbine Blade Analysis

    NASA Technical Reports Server (NTRS)

    Kiefer, Walter D.

    2004-01-01

    I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also various files required by the GlennHT code must be created including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be spent becoming familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
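
    The validation step itself reduces to comparing solver output against a known answer and reporting an error norm. A sketch of such a check in Python, using steady one-dimensional conduction in a slab (whose exact solution is linear) as the benchmark; the noisy array stands in for solver output and is purely illustrative.

        import numpy as np

        def relative_l2_error(x, t_computed, t0=300.0, t1=400.0, length=1.0):
            # Exact solution for steady 1-D conduction with fixed end
            # temperatures: a straight line between t0 and t1.
            t_exact = t0 + (t1 - t0) * x / length
            return np.linalg.norm(t_computed - t_exact) / np.linalg.norm(t_exact)

        x = np.linspace(0.0, 1.0, 11)
        t_solver = 300.0 + 100.0 * x + np.random.normal(0.0, 0.1, x.size)
        print(f"relative L2 error: {relative_l2_error(x, t_solver):.2e}")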

  18. Climateprediction.com: Public Involvement, Multi-Million Member Ensembles and Systematic Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.

    2001-12-01

    The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also to the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison to the whole project, these pilot results constitute a 20-50 member perturbed physics climate ensemble with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.
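
    The combinatorics behind "multi-million member" are easy to make concrete: perturbing just three parameters over three values each already yields 27 distinct model versions, and realistic perturbation sets grow multiplicatively from there. The parameter names and values below are illustrative, not the Unified Model's actual settings.

        import itertools

        perturbations = {
            "cloud_ice_fall_speed": [0.5, 1.0, 2.0],   # scale factors
            "entrainment_rate":     [0.6, 1.0, 1.5],
            "critical_humidity":    [0.7, 0.8, 0.9],
        }
        # One ensemble member per combination of perturbed values.
        members = [dict(zip(perturbations, combo))
                   for combo in itertools.product(*perturbations.values())]
        print(len(members), "ensemble members")   # 3**3 = 27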

  19. SLHAplus: A library for implementing extensions of the standard model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.

    2011-03-01

    We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summary. Program title: SLHAplus_1.3 Catalogue identifier: AEHX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6283 No. of bytes in distributed program, including test data, etc.: 52 119 Distribution format: tar.gz Programming language: C Computer: IBM PC, MAC Operating system: UNIX (Linux, Darwin, Cygwin) RAM: 2000 MB Classification: 11.1 Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec
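
    SLHAplus is a C library, but the file format it reads is simple enough to sketch: named BLOCK sections holding integer keys, numeric values and # comments. A minimal Python reader under those assumptions (DECAY tables and string-valued entries, which the real library also copes with, are deliberately skipped here):

        def read_slha_blocks(path):
            blocks, current = {}, None
            with open(path) as slha:
                for raw in slha:
                    line = raw.split("#", 1)[0].strip()   # drop comments
                    if not line:
                        continue
                    if line.upper().startswith("BLOCK"):
                        current = line.split()[1].upper()
                        blocks[current] = {}
                    elif line.upper().startswith("DECAY"):
                        current = None    # decay tables not handled here
                    elif current is not None:
                        *keys, value = line.split()
                        blocks[current][tuple(map(int, keys))] = float(value)
            return blocks

        # e.g. read_slha_blocks("spectrum.slha")["MASS"][(25,)] -> Higgs mass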

  20. Aozan: an automated post-sequencing data-processing pipeline.

    PubMed

    Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent

    2017-07-15

    Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . Contact: aozan@biologie.ens.fr.
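
    The trigger Aozan automates ("once a run has finished") can be sketched as a polling watcher. The end-of-run marker shown, RTAComplete.txt, is the convention on Illumina systems; the path and the launch step are hypothetical placeholders, not Aozan's code.

        import time
        from pathlib import Path

        handled = set()

        def watch_runs(runs_root):
            for run in Path(runs_root).iterdir():
                done = run.is_dir() and (run / "RTAComplete.txt").exists()
                if done and run.name not in handled:
                    handled.add(run.name)
                    print("run finished, start demultiplexing:", run.name)

        while True:
            watch_runs("/data/sequencer_output")   # hypothetical path
            time.sleep(600)                        # poll every 10 minutes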

  1. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
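
    In validation mode the central quantity is a goodness-of-fit score between simulated and observed seismograms. One simple statistic of the kind used for, say, spectral accelerations is the mean and spread of log residuals; this illustrates the idea but is not the BBP's own GOF definition, and the numbers are invented.

        import numpy as np

        def log_residual_gof(observed, simulated):
            # Bias near 0 means no systematic over- or underprediction;
            # the standard deviation measures period-to-period scatter.
            residuals = np.log(np.asarray(observed) / np.asarray(simulated))
            return residuals.mean(), residuals.std()

        bias, scatter = log_residual_gof([0.31, 0.22, 0.10],
                                         [0.28, 0.25, 0.09])
        print(f"bias={bias:+.3f}, scatter={scatter:.3f}")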

  2. xPerm: fast index canonicalization for tensor computer algebra

    NASA Astrophysics Data System (ADS)

    Martín-García, José M.

    2008-10-01

    We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra. Program summary. Program title: xPerm Catalogue identifier: AEBH_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 93 582 No. of bytes in distributed program, including test data, etc.: 1 537 832 Distribution format: tar.gz Programming language: C and Mathematica (version 5.0 or higher) Computer: Any computer running C and Mathematica (version 5.0 or higher) Operating system: Linux, Unix, Windows XP, MacOS RAM: 20 Mbyte Word size: 64 or 32 bits Classification: 1.5, 5 Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries. Solution method: The Butler-Portugal algorithm. Restrictions: Multiterm symmetries are not considered. Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.
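
    What canonicalization means can be seen from a deliberately naive Python sketch: take the lexicographically least image of the index list over the slot symmetries (here the sign-free pair symmetries of a Riemann-like tensor, a<->b, c<->d and (ab)<->(cd)). This brute-force enumeration is exponential in the number of indices, which is exactly the cost the Butler-Portugal algorithm in xPerm avoids; antisymmetry signs and multiterm symmetries are ignored in the sketch.

        def canonicalize(indices, group):
            # Smallest image of the index tuple under every listed slot
            # permutation; 'group' must contain the full symmetry group.
            return min(tuple(indices[i] for i in perm) for perm in group)

        # All 8 sign-free slot symmetries of R_{abcd}.
        group = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 1, 3, 2), (1, 0, 3, 2),
                 (2, 3, 0, 1), (3, 2, 0, 1), (2, 3, 1, 0), (3, 2, 1, 0)]
        print(canonicalize(("c", "a", "d", "b"), group))   # ('a', 'c', 'b', 'd')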

  3. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance, and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter-size domains, including more than ten million dendrites. Catalogue identifier: AEQZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29,767 No. of bytes in distributed program, including test data, etc.: 3,131,367 Distribution format: tar.gz Programming language: Fortran 90. Computer: Linux PC and clusters. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Program is parallelized using MPI. Number of processors used: 1-50,000 RAM: Memory requirements depend on the grid size Classification: 6.5, 7.7. External routines: MPI (http://www.mcs.anl.gov/research/projects/mpi/), HDF5 (http://www.hdfgroup.org/HDF5/) Nature of problem: Dendritic growth in undercooled Al-3 wt% Cu alloy melt under forced convection. Solution method: The lattice Boltzmann model solves the diffusion, convection, and heat transfer phenomena. The cellular automaton technique is deployed to track the solid/liquid interface. Restrictions: Heat transfer is calculated uncoupled from the fluid flow. Thermal diffusivity is constant. Unusual features: Novel technique, utilizing periodic duplication of a pre-grown “incubation” domain, is applied for the scaleup test. Running time: Running time varies from minutes to days depending on the domain size and number of computational cores.
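
    The lattice Boltzmann building block is compact: relax the distributions toward a local equilibrium, then stream them along the lattice velocities. A minimal D2Q5 diffusion step in Python, purely illustrative; the paper's solver is Fortran 90/MPI and additionally couples convection, heat transfer and the CA interface tracking.

        import numpy as np

        W = np.array([1/3, 1/6, 1/6, 1/6, 1/6])         # D2Q5 weights
        C = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # lattice velocities

        def lb_diffusion_step(f, tau=1.0):
            conc = f.sum(axis=0)                   # macroscopic concentration
            feq = W[:, None, None] * conc          # local equilibrium
            f += (feq - f) / tau                   # BGK collision
            for k, (cx, cy) in enumerate(C):       # streaming, periodic box
                f[k] = np.roll(np.roll(f[k], cx, axis=0), cy, axis=1)
            return f

        f = np.tile(W[:, None, None], (1, 64, 64))  # uniform initial field
        f[:, 32, 32] += W * 5.0                     # point source of solute
        for _ in range(100):
            f = lb_diffusion_step(f)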

  4. MSTor: A program for calculating partition functions, free energies, enthalpies, entropies, and heat capacities of complex molecules including torsional anharmonicity

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.

    2012-08-01

    We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit, and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperatures) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomain defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. 
Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. References: [1] J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.
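
    The one-dimensional torsional eigenvalue summation mentioned above fits in a few lines: diagonalize H = -B d²/dφ² + (V/2)(1 - cos mφ) in a free-rotor plane-wave basis and Boltzmann-sum the resulting levels. A sketch with illustrative parameters in cm⁻¹ (this is the textbook hindered-rotor construction, not MSTor's code):

        import numpy as np

        def torsional_q(B=0.5, V=400.0, m=3, T=300.0, nmax=60):
            k_cm = 0.6950356                 # Boltzmann constant, cm^-1/K
            n = np.arange(-nmax, nmax + 1)   # free-rotor basis exp(i n phi)
            H = np.diag(B * n**2 + V / 2.0)
            for i, ni in enumerate(n):       # cos(m phi) couples n, n +/- m
                for j, nj in enumerate(n):
                    if abs(ni - nj) == m:
                        H[i, j] -= V / 4.0
            levels = np.linalg.eigvalsh(H)
            # Partition function relative to the torsional ground state.
            return np.exp(-(levels - levels.min()) / (k_cm * T)).sum()

        print(torsional_q())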

  5. A database for coconut crop improvement.

    PubMed

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-12-08

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface was developed in PHP, HTML and Java. http://www.bioinfcpcri.org.

  6. Multi-Target Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria

    2005-01-01

    This presentation covers the robotic exploration of Mars using multiple-target command cycles, safe instrument placement, safe operation, and the K9 rover, which has a six-wheel-steer rocker-bogie chassis (as on FIDO and MER) at 70% MER size, a 1.2 GHz Pentium M laptop running the Linux OS, odometry and a compass/inclinometer, the CLARAty architecture, and a 5-DOF manipulator with a CHAMP microscopic camera, SciCams, NavCams and HazCams.

  7. Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.; Schulz, M.

    2010-04-01

    We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, especially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summary. Program title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very CPU intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo Event Generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and to incorporate additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes have no communication between processes, it is possible to achieve an efficiency of 100% (though this number will certainly be penalized by the queuing waiting time). Running time: Times vary according to the process, single or double ionization, to be simulated, the number of processors and the type of theoretical model.
The typical running time ranges from several hours up to a few weeks.
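
    The generator idea itself is ordinary accept-reject sampling: draw kinematic variables uniformly, keep each point with probability proportional to the differential cross section, and write the kept points out as events. A generic Python sketch (the toy distribution and bound are invented; the real code uses distorted-wave cross sections):

        import math, random

        def generate_events(dsigma, bounds, n_events, fmax):
            # fmax must bound dsigma from above on the sampled region.
            events = []
            while len(events) < n_events:
                point = [random.uniform(lo, hi) for lo, hi in bounds]
                if random.random() * fmax < dsigma(*point):
                    events.append(point)
            return events

        # Toy: falling energy spectrum, forward/backward-peaked angle.
        toy = lambda energy, theta: math.exp(-energy / 10.0) * (1 + math.cos(theta)**2)
        events = generate_events(toy, [(0.0, 50.0), (0.0, math.pi)], 1000, fmax=2.0)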

  8. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    PubMed

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project for an electrical stimulator aimed at the motor dysfunction caused by stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of the surface electromyography (sEMG) signal, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding the Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  9. QDENSITY—A Mathematica quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank

    2009-03-01

    This Mathematica 6.0 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summary. Program title: QDENSITY 2.0 Catalogue identifier: ADXH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 055 No. of bytes in distributed program, including test data, etc.: 227 540 Distribution format: tar.gz Programming language: Mathematica 6.0 Operating system: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Catalogue identifier of previous version: ADXH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914 Classification: 4.15 Does the new version supersede the previous version?: Offers an alternative, more up to date, implementation Nature of problem: Analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples: Teleportation, Shor's Algorithm and Grover's search are explained in detail. A tutorial, Tutorial.nb is also enclosed. Reasons for new version: The package has been updated to make it fully compatible with Mathematica 6.0 Summary of revisions: The package has been updated to make it fully compatible with Mathematica 6.0 Running time: Most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.
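
    The density-matrix operations the package wraps are small linear-algebra steps. In NumPy rather than QDENSITY's Mathematica: a gate U acts as ρ → UρU†, and a measurement probability is the trace of ρ against a projector.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate
        rho = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|
        rho = H @ rho @ H.conj().T                         # apply the gate
        P0 = np.array([[1, 0], [0, 0]])                    # project onto |0>
        print(np.trace(P0 @ rho).real)                     # -> 0.5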

  10. Commodity Cluster Computing for Remote Sensing Applications using Red Hat LINUX

    NASA Technical Reports Server (NTRS)

    Dorband, John

    2003-01-01

    Since 1994, we have been doing research at Goddard Space Flight Center on implementing a wide variety of applications on commodity based computing clusters. This talk is about these clusters and how they are used in these applications, including ones for remote sensing.

  11. Automated evaluation of matrix elements between contracted wavefunctions: A Mathematica version of the FRODO program

    NASA Astrophysics Data System (ADS)

    Angeli, C.; Cimiraglia, R.

    2013-02-01

    A symbolic program performing the Formal Reduction of Density Operators (FRODO), formerly developed in the MuPAD computer algebra system with the purpose of evaluating the matrix elements of the electronic Hamiltonian between internally contracted functions in a complete active space (CAS) scheme, has been rewritten in Mathematica. New version program summary. Program title: FRODO Catalogue identifier: ADVY_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3878 No. of bytes in distributed program, including test data, etc.: 170729 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which the Mathematica computer algebra system can be installed Operating system: Linux Classification: 5 Catalogue identifier of previous version: ADVY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 171 (2005) 63 Does the new version supersede the previous version?: No Nature of problem: In order to improve on the CAS-SCF wavefunction one can resort to multireference perturbation theory or configuration interaction based on internally contracted functions (ICFs) which are obtained by application of the excitation operators to the reference CAS-SCF wavefunction. The previous formulation of such matrix elements in the MuPAD computer algebra system has been rewritten using Mathematica. Solution method: The method adopted consists in successively eliminating all occurrences of inactive orbital indices (core and virtual) from the products of excitation operators which appear in the definition of the ICFs and in the electronic Hamiltonian expressed in the second quantization formalism. Reasons for new version: Some years ago we published in this journal a couple of papers [1,2], hereafter referred to as papers I and II, respectively, dedicated to the automated evaluation of the matrix elements of the molecular electronic Hamiltonian between internally contracted functions [3] (ICFs). In paper II the program FRODO (after Formal Reduction Of Density Operators) was presented with the purpose of providing working formulas for each occurrence of the ICFs. The original FRODO program was written in the MuPAD computer algebra system [4] and was actively used in our group for the generation of the matrix elements to be employed in the third-order n-electron valence state perturbation theory (NEVPT) [5-8] as well as in the internally contracted configuration interaction (IC-CI) [9]. We present a new version of the program FRODO written in the Mathematica system [10]. The reason for the rewriting of the program lies in the fact that, on the one hand, MuPAD does not seem to be any longer available as a stand-alone system and, on the other hand, Mathematica, due to its ubiquitousness, appears to be increasingly the computer algebra system most widely used nowadays. Restrictions: The program is limited to no more than doubly excited ICFs. Running time: The examples described in the Readme file take a few seconds to run. References: [1] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 166 (2005) 53. [2] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 171 (2005) 63. [3] H.-J. Werner, P. J. Knowles, Adv. Chem. Phys. 89 (1988) 5803. [4] B. Fuchssteiner, W. Oevel: http://www.mupad.de, MuPAD Research Group, University of Paderborn. MuPAD version 2.5.3 for Linux. [5] C. Angeli, R. Cimiraglia, S. Evangelisti, T. Leininger, J.-P. Malrieu, J. Chem. Phys. 114 (2001) 10252. [6] C. Angeli, R. Cimiraglia, J.-P. Malrieu, J. Chem. Phys. 117 (2002) 9138. [7] C. Angeli, B. Bories, A. Cavallini, R. Cimiraglia, J. Chem. Phys. 124 (2006) 054108. [8] C. Angeli, M. Pastore, R. Cimiraglia, Theor. Chem. Acc. 117 (2007) 743. [9] C. Angeli, R. Cimiraglia, Mol. Phys., in press, DOI:10.1080/00268976.2012.689872. [10] http://www.wolfram.com/Mathematica. Mathematica version 8 for Linux.

  12. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
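
    The inner loop that SBSINumerics parallelizes is parameter fitting: minimize the residuals between model output and experimental data. A single-machine toy with SciPy shows the shape of the problem (the exponential-decay model and synthetic data are illustrative, not an SBSI interface):

        import numpy as np
        from scipy.optimize import least_squares

        t_data = np.linspace(0.0, 4.0, 20)
        y_data = 3.0 * np.exp(-1.2 * t_data) + np.random.normal(0, 0.05, 20)

        def residuals(params):
            amplitude, rate = params
            return amplitude * np.exp(-rate * t_data) - y_data

        fit = least_squares(residuals, x0=[1.0, 1.0])
        print(fit.x)   # recovered (amplitude, rate), close to (3.0, 1.2)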

  13. Improved Distance Learning Environment For Marine Forces Reserve

    DTIC Science & Technology

    2016-09-01

    keyboard, to form a desktop computer. Laptop computers share similar components but add mobility to the user. If additional desktop computers ...for stationary computing devices such as desktop PCs and laptops include the Microsoft Windows, Mac OS, and Linux families of OSs (Hopkins...opportunities to all Marines. For active duty Marines, government-provided desktops and laptops (GPDLs) typically support DL T&E or learning resource

  14. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4 but they require access to dedicated Linux computer clusters. Also no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina jobs distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All results files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can just be restarted to their original operating system. The originality of MOLA lies in the fact that any available computer can be added to the cluster, regardless of platform, without ever using the computer hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
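
    The reported figures follow the usual definitions: with p processors, speed-up is S = T_serial/T_parallel and efficiency is S/p, so 8.64× on 10 processors is about 86% of ideal. A sketch of the arithmetic, together with the round-robin work split such a cluster implies (ligand names and timings are invented):

        ligands = [f"lig_{i:04d}.pdbqt" for i in range(1000)]  # hypothetical
        nodes = 10
        queues = {n: ligands[n::nodes] for n in range(nodes)}  # round-robin

        t_serial, t_parallel = 250.0, 28.9       # hours, illustrative
        speedup = t_serial / t_parallel          # ~8.65x
        efficiency = speedup / nodes             # ~86% of the ideal 10x
        print(f"speed-up {speedup:.2f}x, efficiency {efficiency:.0%}")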

  15. Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.

    PubMed

    Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M

    2015-01-01

    The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.

  16. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    NASA Astrophysics Data System (ADS)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach the vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, which was carried over ESnet after optimizing end points, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.

  17. A database for coconut crop improvement

    PubMed Central

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-01-01

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface was developed in PHP, HTML and Java. Availability http://www.bioinfcpcri.org PMID:17597858

  18. Cyber Fundamental Exercises

    DTIC Science & Technology

    2013-03-01

    the /bin, /sbin, /etc, /var/log, /home, /proc, /root, /dev, /tmp, and /lib directories • Describe the purpose of the /etc/shadow and /etc/passwd ... 2.6.2 /etc/passwd and /etc/shadow The /etc/shadow file didn’t exist on early Linux distributions. Originally only root could access the...etc/passwd file, which stored user names, user configuration information, and passwords. However, when common programs such as ls running under

  19. Agent-Based Framework for Discrete Entity Simulations

    DTIC Science & Technology

    2006-11-01

    Postgres database server for environment queries of neighbors and continuum data. As expected for raw database queries (no database optimizations in...form. Eventually the code was ported to GNU C++ on the same single Intel Pentium 4 CPU running RedHat Linux 9.0 and Postgres database server...Again Postgres was used for environmental queries, and the tool remained relatively slow because of the immense number of queries necessary to assess

  20. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
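
    The saturation observed for branch swapping beyond 16 slave processors is what Amdahl's law predicts whenever part of a step cannot be parallelized: speed-up is capped near 1/s for a serial fraction s, no matter how many processors are added. A sketch with an invented serial fraction (POY's actual profile is not published in these terms):

        def amdahl_speedup(p, serial_fraction):
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

        for p in (8, 16, 32, 64, 256):
            print(p, round(amdahl_speedup(p, serial_fraction=0.06), 1))
        # speed-up climbs to ~8.4 at p=16 but only ~15.7 at p=256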

  1. An upgraded version of the generator BCVEGPY2.0 for hadronic production of the Bc meson and its excited states

    NASA Astrophysics Data System (ADS)

    Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang

    2006-11-01

    An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, additionally, a GNU C compiler, users may simulate Bc events in various experimental environments very conveniently. It has been arranged with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version the execution is arranged so that the GNU command make compiles the requested code with the help of a master makefile in the main code directory, and then builds an executable file with the default name run. Finally, this paper may also be considered as an erratum, i.e., typo errors in BCVEGPY2.0 and the corresponding corrections have been listed. New version program summary (BCVEGPY2.1). Title of program: BCVEGPY2.1 Catalogue identifier: ADTJ_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to original program: BCVEGPY2.0 Reference in CPC: Comput. Phys. Commun. 174 (2006) 241 Does the new version supersede the old program: No Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of lines in distributed program, including test data, etc.: 31 521 No. of bytes in distributed program, including test data, etc.: 1 310 179 Distribution format: tar.gz Nature of physical problem: Hadronic production of the Bc meson itself and its excited states Method of solution: Depending on an option, the code can generate weighted or unweighted events. An interface to PYTHIA is provided to meet the needs of jets hadronization in the production. Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach. Reasons for new version: Responding to the feedback from users, we rearrange the program in a convenient way so that it can be easily adopted by the users to do the simulations according to their own experimental environment (e.g. detector acceptances and experimental cuts). We have paid much effort to rearrange the program into several modules with less cross communication among the modules; the main program is slimmed down and all the further actions are decoupled from the main program and can be easily called for various purposes. Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1, it takes about 20 hours on a 1.8 GHz Intel P4-processor machine to generate 1000 events; however, when IDWTUP = 3, to generate 10^6 events it takes only about 40 minutes. For the P-wave (cb¯)-quarkonium, production takes almost two times longer than for the S-wave quarkonium. Summary of the changes (improvements): (1) The structure and organization of the program have been changed a lot. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old version source files are divided into several parts for the purpose).
The main program is slimmed down and all the further actions are decoupled from the main program so that they can be easily called for various applications. All of the Fortran codes are organized in the main code directory named bcvegpy2.1, which contains the main program, all of its prerequisite files and subsidiary folders (subdirectories of the main code directory). The method for setting the parameters is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120. [1

  2. CADNA_C: A version of CADNA for use with C or C++ programs

    NASA Astrophysics Data System (ADS)

    Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne

    2010-11-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary. Program title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments.
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
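
    The Discrete Stochastic Arithmetic idea can be caricatured in Python: run the same cancellation-prone computation a few times with randomized last-bit perturbations (a crude stand-in for CADNA's random rounding) and count the digits the samples share. This toy instruments the code by hand and is not the CADNA library.

        import math, random

        def noisy(x):
            # Jitter roughly the last bit of a double, in a random direction.
            return x * (1.0 + random.choice((-1, 1)) * random.random() * 2**-52)

        def unstable_diff():
            a, b = noisy(1.0e16), noisy(1.0)
            return noisy(a + b) - noisy(a)     # catastrophic cancellation

        samples = [unstable_diff() for _ in range(3)]
        mean = sum(samples) / 3
        spread = max(samples) - min(samples)
        if spread == 0:
            digits = 15.0                      # all runs agree to full precision
        elif mean == 0:
            digits = 0.0
        else:
            digits = max(0.0, math.log10(abs(mean) / spread))
        print(samples, round(digits, 1))       # ~0 exact digits survive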

  3. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  4. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.

  5. Neuronify: An Educational Simulator for Neural Circuits.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).
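
    The neurons Neuronify wires together reduce to a one-line update per time step. A leaky integrate-and-fire neuron in Python (Neuronify itself is C++/QML; the constants are toy values in ms and mV):

        dt, tau = 0.1, 10.0                    # time step and membrane tau, ms
        v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0   # mV
        v, current, spikes = v_rest, 25.0, []
        for step in range(1000):               # simulate 100 ms
            v += dt / tau * (v_rest - v + current)   # leaky integration
            if v >= v_thresh:                  # threshold crossing: spike
                spikes.append(step * dt)
                v = v_reset                    # reset after the spike
        print(len(spikes), "spikes in 100 ms")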

  6. In silico reconstitution of Listeria propulsion exhibits nano-saltation.

    PubMed

    Alberts, Jonathan B; Odell, Garrett M

    2004-12-01

    To understand how the actin-polymerization-mediated movements in cells emerge from myriad individual protein-protein interactions, we developed a computational model of Listeria monocytogenes propulsion that explicitly simulates a large number of monomer-scale biochemical and mechanical interactions. The literature on actin networks and L. monocytogenes motility provides the foundation for a realistic mathematical/computer simulation, because most of the key rate constants governing actin network dynamics have been measured. We use a cluster of 80 Linux processors and our own suite of simulation and analysis software to characterize salient features of bacterial motion. Our "in silico reconstitution" produces qualitatively realistic bacterial motion with regard to speed and persistence of motion and actin tail morphology. The model also produces smaller scale emergent behavior; we demonstrate how the observed nano-saltatory motion of L. monocytogenes, in which runs punctuate pauses, can emerge from a cooperative binding and breaking of attachments between actin filaments and the bacterium. We describe our modeling methodology in detail, as it is likely to be useful for understanding any subcellular system in which the dynamics of many simple interactions lead to complex emergent behavior, e.g., lamellipodia and filopodia extension, cellular organization, and cytokinesis.
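
    The run-and-pause mechanism, movement bursts occurring when few filament-bacterium attachments happen to hold, can be caricatured with a toy stochastic model; all rates and the load law below are illustrative stand-ins, not the measured constants the paper uses.

        import random

        # Toy tethered-ratchet model: N molecular links between bacterium and
        # actin tail stochastically detach (k_off) and reattach (k_on); the
        # bacterium advances quickly only when few links restrain it, so the
        # trajectory alternates runs and pauses.
        random.seed(1)
        N, dt, k_on, k_off = 50, 1e-4, 30.0, 10.0
        attached, x, v_free = N // 2, 0.0, 0.05   # links, position (um), free speed (um/s)
        for step in range(50000):                 # 5 s of simulated time
            detach = sum(random.random() < k_off * dt for _ in range(attached))
            attach = sum(random.random() < k_on * dt for _ in range(N - attached))
            attached += attach - detach
            x += v_free * dt / (1.0 + attached ** 2)  # speed collapses under load
            if step % 10000 == 0:
                print(f"t={step * dt:4.1f} s  links={attached:2d}  x={x * 1000:7.4f} nm")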

  7. Neuronify: An Educational Simulator for Neural Circuits

    PubMed Central

    Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux). PMID:28321440

  8. Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner

    NASA Astrophysics Data System (ADS)

    Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.

    2007-02-01

    In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easily scalable, modular small-animal PET camera, has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with an Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64-bit, 3.0 GHz Xeon) controlled by a Sun Grid Engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.
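
    The order-of-magnitude speedup comes from the embarrassingly parallel nature of Monte Carlo: one long run is split into independently seeded chunks that run concurrently and are merged afterwards. The sketch below shows the split/merge pattern with a trivial stand-in integrand; the actual work used GATE jobs under a Sun Grid Engine.

        from multiprocessing import Pool
        import random

        # Split one Monte Carlo run into independent, distinctly seeded chunks,
        # run them in parallel, and merge the tallies. The "physics" here is a
        # trivial stand-in (estimating pi by rejection sampling).
        def chunk(args):
            seed, n = args
            rng = random.Random(seed)
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

        if __name__ == "__main__":
            n_jobs, n_per_job = 10, 1_000_000
            with Pool(n_jobs) as pool:
                hits = pool.map(chunk, [(seed, n_per_job) for seed in range(n_jobs)])
            print("pi ~", 4.0 * sum(hits) / (n_jobs * n_per_job))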

  9. Phyx: phylogenetic tools for unix.

    PubMed

    Brown, Joseph W; Walker, Joseph F; Smith, Stephen A

    2017-06-15

    The ease with which phylogenomic data can be generated has drastically escalated the computational burden for even routine phylogenetic investigations. To address this, we present phyx: a collection of programs written in C++ to explore, manipulate, analyze and simulate phylogenetic objects (alignments, trees and MCMC logs). Modelled after Unix/GNU/Linux command line tools, individual programs perform a single task and operate on standard I/O streams that can be piped to quickly and easily form complex analytical pipelines. Because of the stream-centric paradigm, memory requirements are minimized (often only a single tree or sequence in memory at any instance), and hence phyx is capable of efficiently processing very large datasets. phyx runs on POSIX-compliant operating systems. Source code, installation instructions, documentation and example files are freely available under the GNU General Public License at https://github.com/FePhyFoFum/phyx. Contact: eebsmith@umich.edu. Supplementary data are available at Bioinformatics online.
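
    The stream-centric pattern is easy to emulate: hold one record at a time and compose tools via pipes. Below is a generic sketch (not a phyx program) of a minimal FASTA length filter; the script name in the usage comment is hypothetical.

        import sys

        # Stream-centric filter: reads FASTA records from stdin one at a time,
        # keeps only sequences of at least MIN_LEN bases, writes to stdout.
        # Only one record is ever in memory, so it composes into pipelines:
        #   cat seqs.fa | python length_filter.py | some_other_tool
        MIN_LEN = 100

        def records(stream):
            name, seq = None, []
            for line in stream:
                line = line.rstrip()
                if line.startswith(">"):
                    if name is not None:
                        yield name, "".join(seq)
                    name, seq = line, []
                else:
                    seq.append(line)
            if name is not None:
                yield name, "".join(seq)

        for name, seq in records(sys.stdin):
            if len(seq) >= MIN_LEN:
                print(name)
                print(seq)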

  10. Phyx: phylogenetic tools for unix

    PubMed Central

    Brown, Joseph W.; Walker, Joseph F.; Smith, Stephen A.

    2017-01-01

    Summary: The ease with which phylogenomic data can be generated has drastically escalated the computational burden for even routine phylogenetic investigations. To address this, we present phyx: a collection of programs written in C++ to explore, manipulate, analyze and simulate phylogenetic objects (alignments, trees and MCMC logs). Modelled after Unix/GNU/Linux command line tools, individual programs perform a single task and operate on standard I/O streams that can be piped to quickly and easily form complex analytical pipelines. Because of the stream-centric paradigm, memory requirements are minimized (often only a single tree or sequence in memory at any instance), and hence phyx is capable of efficiently processing very large datasets. Availability and Implementation: phyx runs on POSIX-compliant operating systems. Source code, installation instructions, documentation and example files are freely available under the GNU General Public License at https://github.com/FePhyFoFum/phyx. Contact: eebsmith@umich.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28174903

  11. Healthwatch-2 System Overview

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Mosher, Marianne; Huff, Edward M.

    2004-01-01

    Healthwatch-2 (HW-2) is a research tool designed to facilitate the development and testing of in-flight health monitoring algorithms. HW-2 software is written in C/C++ and executes on an x86-based computer running the Linux operating system. The executive module has interfaces for collecting various signal data, such as vibration, torque, tachometer, and GPS. It is designed to perform in-flight time or frequency averaging based on specifications defined in a user-supplied configuration file. Averaged data are then passed to a user-supplied algorithm written as a Matlab function. This allows researchers a convenient method for testing in-flight algorithms. In addition to its in-flight capabilities, HW-2 software is also capable of reading archived flight data and processing it as if collected in-flight. This allows algorithms to be developed and tested in the laboratory before being flown. Currently HW-2 has passed its checkout phase and is collecting data on a Bell OH-58C helicopter operated by the U.S. Army at NASA Ames Research Center.
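
    In-flight time averaging keyed to the tachometer signal (time-synchronous averaging) is the core signal-processing step such monitoring systems perform. Below is a generic numpy sketch on synthetic data with illustrative parameters, not HW-2 code.

        import numpy as np

        # Time-synchronous averaging: slice the vibration signal at tachometer
        # pulses (one per shaft revolution), resample each revolution onto a
        # common angular grid, and average. Shaft-coherent components survive;
        # asynchronous noise averages away roughly as 1/sqrt(revolutions).
        fs = 10_000.0                               # sample rate (Hz)
        t = np.arange(0, 5.0, 1.0 / fs)
        shaft_hz = 30.0
        signal = np.sin(2 * np.pi * 4 * shaft_hz * t) + 0.8 * np.random.randn(t.size)
        pulses = np.arange(0, t[-1], 1.0 / shaft_hz)    # tach pulse times
        n_bins = 256
        revs = []
        for start, stop in zip(pulses[:-1], pulses[1:]):
            rev = signal[int(start * fs):int(stop * fs)]
            revs.append(np.interp(np.linspace(0, 1, n_bins, endpoint=False),
                                  np.linspace(0, 1, rev.size, endpoint=False), rev))
        avg = np.mean(revs, axis=0)
        print(f"averaged {len(revs)} revolutions; rms of average = {avg.std():.3f}")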

  12. AirShow 1.0 CFD Software Users' Guide

    NASA Technical Reports Server (NTRS)

    Mohler, Stanley R., Jr.

    2005-01-01

    AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.

  13. NBodyLab Simulation Experiments with GRAPE-6a and MD-GRAPE2 Acceleration

    NASA Astrophysics Data System (ADS)

    Johnson, V.; Ates, A.

    2005-12-01

    NBodyLab is an astrophysical N-body simulation testbed for student research. It is accessible via a web interface and runs as a backend framework under Linux. NBodyLab can generate data models or perform star catalog lookups, transform input data sets, perform direct summation gravitational force calculations using a variety of integration schemes, and produce analysis and visualization output products. NEMO (Teuben 1994), a popular stellar dynamics toolbox, is used for some functions. NBodyLab integrators can optionally utilize two types of low-cost desktop supercomputer accelerators, the newly available GRAPE-6a (125 Gflops peak) and the MD-GRAPE2 (64-128 Gflops peak). The initial version of NBodyLab was presented at ADASS 2002. This paper summarizes software enhancements developed subsequently, focusing on GRAPE-6a related enhancements, and gives examples of computational experiments and astrophysical research, including star cluster and solar system studies, that can be conducted with the new testbed functionality.

  14. DSISoft—a MATLAB VSP data processing package

    NASA Astrophysics Data System (ADS)

    Beaty, K. S.; Perron, G.; Kay, I.; Adam, E.

    2002-05-01

    DSISoft is a public domain vertical seismic profile processing software package developed at the Geological Survey of Canada. DSISoft runs under MATLAB version 5.0 and above and hence is portable between computer operating systems supported by MATLAB (i.e. Unix, Windows, Macintosh, Linux). The package includes modules for reading and writing various standard seismic data formats and for data editing, sorting, filtering, and other basic processing. The processing sequence can be scripted, allowing batch processing and easy documentation. A structured format has been developed to ensure future additions to the package are compatible with existing modules. Interactive modules have been created using MATLAB's graphical user interface builder for displaying seismic data, picking first break times, examining frequency spectra, doing f-k filtering, and plotting the trace header information. DSISoft's modular design facilitates the incorporation of new processing algorithms as they are developed. This paper gives an overview of the scope of the software and serves as a guide for the addition of new modules.

  15. A Comparison of Satellite Conjunction Analysis Screening Tools

    DTIC Science & Technology

    2011-09-01

    visualization tool. Version 13.1.4 for Linux was tested. The SOAP conjunction analysis function does not have the capacity to perform the large...was examined by SOAP to confirm the conjunction. STK Advanced CAT (Conjunction Analysis Tools) is an add-on module for the STK ...run with each tool. When attempting to perform the seven day all vs all analysis with STK Advanced CAT, the program consistently crashed during report

  16. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  17. Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data

    DTIC Science & Technology

    2014-04-01

    annotate other ontologies for the visual interface client. Finally, we are actively working on software development of both a backend server and the...the following infrastructure and resources. For the development and management of the ontologies, we installed a framework consisting of a server...that is being developed by Google. Using these technologies, we developed an HTML5 client that runs on Windows, Mac OSX, Linux and mobile systems

  18. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.; Jolly, Ronald L.

    2007-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/ Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in the article on page 8. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.

  19. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.

    2005-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in "Predicting Rocket or Jet Noise in Real Time" (SSC-00215-1), which appears elsewhere in this issue of NASA Tech Briefs. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.

  20. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  1. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical when trying to create input files for the thousands of buildings needed to simulate national building energy performance. To streamline the process of creating the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on the Linux cluster by using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility “make”, the idf files can then be automatically run through the Linux cluster and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files results in the ability to batch simulate building energy performance and scale the result to national energy consumption estimates.
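
    The CSV-to-input expansion step can be pictured as template substitution: one CSV row per building, expanded into an input file. The sketch below is a minimal illustration with invented field names ("name", "north_axis") and an abbreviated idf fragment; it is not the NREL Preprocessor's actual schema.

        import csv
        from string import Template

        # Each CSV row holds the high-level parameters for one building; the
        # template expands a row into (a fragment of) an EnergyPlus idf file.
        # A build tool such as make can then sweep the generated *.idf files.
        idf_template = Template("""\
        Building,
          $name,            !- Name
          $north_axis,      !- North Axis {deg}
          City;             !- Terrain
        """)

        with open("buildings.csv", newline="") as fh:     # columns: name, north_axis
            for row in csv.DictReader(fh):
                out = f"{row['name']}.idf"
                with open(out, "w") as idf:
                    idf.write(idf_template.substitute(row))
                print("wrote", out)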

  2. Connecting an Ocean-Bottom Broadband Seismometer to a Seafloor Cabled Observatory: A Prototype System in Monterey Bay

    NASA Astrophysics Data System (ADS)

    McGill, P.; Neuhauser, D.; Romanowicz, B.

    2008-12-01

    The Monterey Ocean-Bottom Broadband (MOBB) seismic station was installed in April 2003, 40 km offshore from the central coast of California at a seafloor depth of 1000 m. It comprises a three-component broadband seismometer system (Guralp CMG-1T), installed in a hollow PVC caisson and buried under the seafloor; a current meter; and a differential pressure gauge. The station has been operating continuously since installation with no connection to the shore. Three times each year, the station is serviced with the aid of a Remotely Operated Vehicle (ROV) to change the batteries and retrieve the seismic data. In February 2009, the MOBB system will be connected to the Monterey Accelerated Research System (MARS) seafloor cabled observatory. The NSF-funded MARS observatory comprises a 52 km electro-optical cable that extends from a shore facility in Moss Landing out to a seafloor node in Monterey Bay. Once installation is completed in November 2008, the node will provide power and data to as many as eight science experiments through underwater electrical connectors. The MOBB system is located 3 km from the MARS node, and the two will be connected with an extension cable installed by an ROV with the aid of a cable-laying toolsled. The electronics module in the MOBB system is being refurbished to support the connection to the MARS observatory. The low-power autonomous data logger has been replaced with a PC/104 computer stack running embedded Linux. This new computer will run an Object Ring Buffer (ORB), which will collect data from the various MOBB sensors and forward it to another ORB running on a computer at the MARS shore station. There, the data will be archived and then forwarded to a third ORB running at the UC Berkeley Seismological Laboratory. Timing will be synchronized among MOBB's multiple acquisition systems using NTP, GPS clock emulation, and a precise timing signal from the MARS cable. The connection to the MARS observatory will provide real-time access to the MOBB data and eliminate the need for frequent servicing visits. The new system uses off-the-shelf hardware and open-source software, and will serve as a prototype for future instruments connected to seafloor cabled observatories.

  3. A Business Case Study of Open Source Software

    DTIC Science & Technology

    2001-07-01

    LinuxPPC LinuxPPC www.linuxppc.com MandrakeSoft Linux -Mandrake www.linux-mandrake.com/ en / CLE Project CLE cle.linux.org.tw/CLE/e_index.shtml Red Hat... en Coyote Linux www2.vortech.net/coyte/coyte.htm MNIS www.mnis.fr Data-Portal www.data-portal.com Mr O’s Linux Emporium www.ouin.com DLX Linux www.wu...1998 1999 Year S h ip m en ts ( in m ill io n s) Source: IDC, 2000. Figure 11. Worldwide New Linux Shipments (Client and Server) 3.2.2 Market

  4. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand their performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provide the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources while maintaining good performance.
In Section 4.1 we describe our tests with LXC as a non-root user, leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time and MFlops when running in hypervisor-based environments (VMs), as compared to near-native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker based tests achieved >= ~9 Gbits/sec, while the KVM configuration only achieved 2.5 Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: Section 1 introduces the report and clarifies the scope of the proj...
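
    The 'user' namespace idea can be reproduced in miniature with util-linux's unshare(1): an unprivileged user appears as root inside the new namespace while holding no privilege on the host. The sketch below assumes a kernel and distribution with unprivileged user namespaces enabled.

        import subprocess

        # Start a command in a new user namespace with the current UID mapped
        # to root inside it; compare the UID seen inside and outside.
        inside = subprocess.run(
            ["unshare", "--user", "--map-root-user", "id", "-u"],
            capture_output=True, text=True, check=True)
        outside = subprocess.run(
            ["id", "-u"], capture_output=True, text=True, check=True)
        print("uid outside namespace:", outside.stdout.strip())   # e.g. 1000
        print("uid inside namespace: ", inside.stdout.strip())    # 0 (mapped root)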

  5. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version of the TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
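
    Parallel simulators of this kind typically rest on domain decomposition plus global reductions over MPI. A minimal mpi4py sketch of that pattern follows (an illustration only; TOUGH2-MP itself is Fortran).

        from mpi4py import MPI

        # Domain-decomposition sketch: the global grid is split across ranks,
        # each rank works on its block, and a global residual is formed with a
        # reduction. Run with e.g.:  mpiexec -n 4 python demo.py
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_global = 1_000_000
        n_local = n_global // size            # assume size divides n_global
        local = [1.0] * n_local               # stand-in for this rank's unknowns
        local_residual = sum(x * x for x in local)
        global_residual = comm.allreduce(local_residual, op=MPI.SUM)
        if rank == 0:
            print("global residual:", global_residual)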

  6. Plasma Interactions With Spacecraft (I)

    DTIC Science & Technology

    2009-04-01

    with the Windows, Red Hat Linux, and MacOS X environments. We wrote N2kScriptRunner, a C++ code that runs a Nascap-2k script outside of the Java ...console-based and with a Java interface), a stand-alone program that reads and writes Nascap-2k database files. This program has proved invaluable...surface currents for DSX and prototyped it in Java . A description of the algorithm and the prototype implementation is in Section 3. 1.5. DSX

  7. Reduction of spectra exposed by the 700mm CCD camera of the Ondřejov telescope coudé spectrograph

    NASA Astrophysics Data System (ADS)

    Skoda, Petr; Slechta, Miroslav

    We present a brief cook-book for the reduction of spectra exposed by the Ondřejov 2-meter telescope coudé spectrograph. For the data reduction, we use standard IRAF packages running on Solaris and Linux. The sequence of commands is given for a typical reduction session, together with a short explanation and a detailed list of parameter settings. The reduction progress is illustrated by example plots.

  8. Learn on the Fly: Quiescent Routing in Wireless Sensor Networks

    DTIC Science & Technology

    2005-02-01

    quality solely based on data traffic without employing beacons. Using a realistic sensor network traffic trace and an 802.11b testbed of 195 Stargates ...testbed of 195 Stargates [1] with 802.11b radios. For instance, we investigate the validity of geographic uniformity which is assumed in literature [19...Figure 1), we deploy 29 Stargates in a straight line, with a 45-meter separation between any two consecutive Stargates . The Stargates run Linux with

  9. LINUX, Virtualization, and the Cloud: A Hands-On Student Introductory Lab

    ERIC Educational Resources Information Center

    Serapiglia, Anthony

    2013-01-01

    Many students are entering Computer Science education with limited exposure to operating systems and applications other than those produced by Apple or Microsoft. This gap in familiarity with the Open Source community can quickly be bridged with a simple exercise that can also be used to strengthen two other important current computing concepts,…

  10. Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application

    NASA Astrophysics Data System (ADS)

    Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.

    2011-12-01

    EPICS is used worldwide mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well suited to the design and development of automation systems, which are typically VME- or PLC-based, and of soft real-time systems, it may present several drawbacks when used to develop hard real-time systems, especially when general purpose operating systems such as plain Linux are chosen. This is particularly true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems, which are meant to run on general purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different Tokamaks. In this paper, we first summarize differences and similarities between EPICS IOC and MARTe. Then we report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux, with an I/O control algorithm implemented in an EPICS IOC and in MARTe.
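
    Jitter of a periodic control loop, the key figure of merit in such comparisons, can be probed crudely from user space in a few lines; the sketch below is only an illustration, not the paper's instrumented measurement setup.

        import time

        # Measure scheduling jitter of a 1 kHz periodic loop. On a stock
        # desktop kernel the worst-case lateness is typically tens to hundreds
        # of microseconds; PREEMPT_RT kernels tighten the tail considerably.
        period = 0.001
        deadline = time.monotonic() + period
        worst = 0.0
        for _ in range(5000):                     # about 5 s of wall time
            while time.monotonic() < deadline:    # busy-wait to the deadline
                pass
            late = time.monotonic() - deadline
            worst = max(worst, late)
            deadline += period
        print(f"worst-case jitter over 5 s: {worst * 1e6:.1f} us")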

  11. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities at different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an operating system, because either their software needs a definite version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to provide services to communities with incompatible requirements, it has to split its physical resources among those communities. This splitting will inevitably lead to an underuse of resources because the data centers are bound to have periods where one or more of its subclusters are idle. It is in this situation where cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.

  12. Automated symbolic calculations in nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Kröger, Martin; Hütter, Markus

    2010-12-01

    We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics. Program summary. Program title: Poissonbracket.nb. Catalogue identifier: AEGW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 227 952. No. of bytes in distributed program, including test data, etc.: 268 918. Distribution format: tar.gz. Programming language: Mathematica 7.0. Computer: Any computer running Mathematica 6.0 and later versions. Operating system: Linux, MacOS, Windows. RAM: 100 Mb. Classification: 4.2, 5, 23. Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form. Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals, at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica. Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
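
    For reference, the identity being tested is the Jacobi identity for a Poisson bracket; in the field-theoretic setting the bracket is built from variational derivatives and an operator matrix L (generic notation, not necessarily that of the notebook):

        \{A,\{B,C\}\} + \{B,\{C,A\}\} + \{C,\{A,B\}\} = 0,
        \qquad
        \{A,B\} = \int \frac{\delta A}{\delta x_i}\, L_{ij}\, \frac{\delta B}{\delta x_j}\, \mathrm{d}V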

  13. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

    The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summary. Program title: CADNA. Catalogue identifier: AEAT_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 28 488. No. of bytes in distributed program, including test data, etc.: 463 778. Distribution format: tar.gz. Programming language: Fortran. NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0. Computer: PC running Linux with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: Linux, UNIX. Classification: 6.5. Catalogue identifier of previous version: AEAT_v1_0. Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933. Does the new version supersede the previous version?: Yes. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated.
Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
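
    The Discrete Stochastic Arithmetic idea can be imitated crudely in a few lines: run the same computation several times under randomized rounding and estimate the number of significant digits the runs share from their spread. The sketch below injects ulp-scale noise at each operation instead of switching the hardware rounding mode, which is what CADNA actually does.

        import math
        import random
        import statistics

        # Crude imitation of Discrete Stochastic Arithmetic.
        def noisy(x, rel=2 ** -52):
            # perturb a result at the last-bit (ulp) scale
            return x * (1.0 + random.uniform(-rel, rel))

        def alternating_sum(n=2000):
            s = 0.0
            for k in range(1, n + 1):
                s = noisy(s + noisy((-1.0) ** (k + 1) / k))
            return s                      # converges slowly towards ln 2

        samples = [alternating_sum() for _ in range(3)]
        mean = statistics.fmean(samples)
        spread = statistics.stdev(samples)
        digits = math.log10(abs(mean) / spread) if spread > 0 else 15.9
        print(f"common significant digits across runs: about {digits:.1f}")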

  14. Processable Data Making in the Remote Server Sent by Android Phone as a GIS Data Collecting Tool

    NASA Astrophysics Data System (ADS)

    Karaagac, Abdullah; Bostancı, Bulent

    2016-04-01

    Mobile technologies are improving and getting cheaper every day. Not only have smart phones improved greatly, but new types of mobile applications and sensors have arrived with them. Maps and navigation applications are among the most popular application types. Most of these applications use location services including GNSS, Wi-Fi, cellular data and beacon services. Although the coordinate precision is not very high, it is adequate for many applications. Android is a mobile operating system based on the Linux kernel. It is compatible with various mobile devices such as smart phones, tablets, smart TVs, wearable technologies etc. Android has a large capability for application development through open source libraries and device sensors such as the gyroscope, GNSS etc. Android Studio is the most popular integrated development environment (IDE) for Android devices, developed mainly by Google. It was announced on May 16, 2013 at the Google I/O conference. Android Studio is built upon the Gradle build architecture, which is written in Java. SQLite is a relational database management system in very common use on mobile devices. It is implemented as a C programming library and is mostly used by embedding it into a piece of software or an application. It supports many operating systems including Android. Remote servers can take several forms, from highly complex to simple. For this project we use an open-source quad-core single-board computer, the Raspberry Pi 2. This device includes a 900 MHz ARMv7-compatible quad-core CPU, a VideoCore IV GPU and 1 GB of RAM. Although the Raspberry Pi 2's main operating system is Raspbian, we use Debian; both are Linux-based operating systems. The Raspberry Pi is compatible with many programming languages, and some are optimized for this device: Python, Java, C, C++, Ruby, Perl and Squeak Smalltalk. In this paper, a mobile application is developed to send coordinate and string data to a SQL database embedded in a remote server. The application runs on a mobile phone running the Android operating system. It obtains location information from GNSS and cellular data, while the user enters the other information manually. Clicking a button sends the information to the remote server, which runs SQLite. All of this information is convertible to any type of measure; for example, coordinates can be converted from WGS 84 to ITRF.
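
    A minimal sketch of the receiving side described above, assuming the phone POSTs a JSON record over HTTP (the endpoint, port and field names are our invention, not a published API):

        import json
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # The phone POSTs {"lat": ..., "lon": ..., "note": ...}; the handler
        # inserts the record into an embedded SQLite database on the Pi.
        db = sqlite3.connect("points.db", check_same_thread=False)
        db.execute("CREATE TABLE IF NOT EXISTS points (lat REAL, lon REAL, note TEXT)")

        class Handler(BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers["Content-Length"]))
                rec = json.loads(body)
                db.execute("INSERT INTO points VALUES (?, ?, ?)",
                           (rec["lat"], rec["lon"], rec.get("note", "")))
                db.commit()
                self.send_response(204)      # accepted, no body to return
                self.end_headers()

        HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()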

  15. Hybrid cloud and cluster computing paradigms for life science applications

    PubMed Central

    2010-01-01

    Background Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Results Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments. PMID:21210982

  16. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.

  17. A Multi-purpose Brain-Computer Interface Output Device

    PubMed Central

    Thompson, David E; Huggins, Jane E

    2012-01-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as standalone communication and control systems, rather than as interfaces to existing systems built for these purposes. While an individual communication and control system may be powerful or flexible, no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e. without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems. PMID:22208120

  18. A multi-purpose brain-computer interface output device.

    PubMed

    Thompson, David E; Huggins, Jane E

    2011-10-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as stand-alone communication and control systems, rather than as interfaces to existing systems built for these purposes. An individual communication and control system may be powerful or flexible, but no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e., without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems.

  19. Sensitivity of surface meteorological analyses to observation networks

    NASA Astrophysics Data System (ADS)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
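
    In matrix form the analysis and its observation sensitivities both reduce to the gain matrix K. The tiny 1-D numpy analog below uses illustrative covariances, not the thesis' configuration: the analysis is xa = xb + K(y - Hxb) with K = BH^T(HBH^T + R)^(-1), and row i of K is the sensitivity of analysis point i to every observation.

        import numpy as np

        # 1-D analog of the 2-D variational analysis: background xb on n grid
        # points, three point observations y, Gaussian background covariance B,
        # diagonal observation-error covariance R.
        n, obs_idx = 50, np.array([10, 25, 40])
        xb = np.zeros(n)
        y = np.array([1.0, 2.0, 0.5])
        H = np.zeros((3, n)); H[np.arange(3), obs_idx] = 1.0
        d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        B = np.exp(-(d / 5.0) ** 2)          # correlation length: 5 grid points
        R = 0.1 * np.eye(3)
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        xa = xb + K @ (y - H @ xb)
        print("analysis at point 25:", round(float(xa[25]), 3))
        print("sensitivity of point 25 to each ob:", np.round(K[25], 3))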

  20. PGOPHER in the Classroom and the Laboratory

    NASA Astrophysics Data System (ADS)

    Western, Colin

    2015-06-01

    PGOPHER is a general-purpose program for simulating and fitting rotational, vibrational and electronic spectra. As it uses a graphical user interface, the basic operation is sufficiently straightforward to make it suitable for use in undergraduate practicals and computer-based classes. This talk will present two experiments that have been in regular use by Bristol undergraduates for some years based on the analysis of infra-red spectra of cigarette smoke and, for more advanced students, visible and near ultra-violet spectra of a nitrogen discharge and a hydrocarbon flame. For all of these the rotational structure is analysed and used to explore ideas of bonding. The talk will discuss the requirements for the apparatus and the support required. Ideas for other possible experiments and computer-based exercises will also be presented, including a group exercise. The PGOPHER program is open source, and is available for Microsoft Windows, Apple Mac and Linux. It can be freely downloaded from the supporting website http://pgopher.chm.bris.ac.uk. The program does not require any installation process, so it can be run on students' own machines or easily set up on classroom or laboratory computers. PGOPHER, a Program for Simulating Rotational, Vibrational and Electronic Structure, C. M. Western, University of Bristol, http://pgopher.chm.bris.ac.uk; PGOPHER version 8.0, C M Western, 2014, University of Bristol Research Data Repository, doi:10.5523/bris.huflggvpcuc1zvliqed497r2

  1. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

    MAPA (Modular Accelerator Physics Analysis) is an object-oriented application for accelerator design and analysis with a Motif-based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of Lifetime plots and Surfaces of Section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  2. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on PCs. However, a PC is not portable and consumes considerable power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm on that platform, and finally implemented the ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and the data stream flows smoothly between the camera and the ARM chip under the embedded Linux system. This is an effective way to realize a portable embedded iris recognition system on ARM.

  3. Numericware i: Identical by State Matrix Calculator

    PubMed Central

    Kim, Bongsong; Beavis, William D

    2017-01-01

    We introduce software, Numericware i, to compute an identical-by-state (IBS) matrix from genotypic data. Calculating an IBS matrix with a large dataset requires a large amount of computer memory and lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. The multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) cores. The forward chopping addresses memory limitations by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
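
    To make the 0-to-2 coefficient range and the forward-chopping idea concrete, here is a minimal NumPy sketch; it is not Numericware i's code, and the 0/1/2 genotype coding and default chunk size are assumptions. Each pairwise coefficient is the mean over loci of 2 - |gi - gj|, so identical genotypes score 2 and opposite homozygotes score 0, and accumulating over SNP chunks bounds how much of the dataset is expanded in memory at once.

      import numpy as np

      def ibs_matrix(genotypes, chunk=1_000):
          """IBS matrix for genotypes coded 0/1/2.

          genotypes: (n_individuals, n_snps) integer array.  Returns an
          (n, n) matrix with entries in [0, 2], accumulated chunk by
          chunk in the spirit of forward chopping.
          """
          n, m = genotypes.shape
          acc = np.zeros((n, n))
          for start in range(0, m, chunk):
              block = genotypes[:, start:start + chunk].astype(np.int8)
              # pairwise 2 - |gi - gj|, summed over this chunk of SNPs
              diff = np.abs(block[:, None, :] - block[None, :, :])
              acc += (2 - diff).sum(axis=2)
          return acc / m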

  4. Setting up a Low-Cost Lab Management System for a Multi-Purpose Computing Laboratory Using Virtualisation Technology

    ERIC Educational Resources Information Center

    Mok, Heng Ngee; Lee, Yeow Leong; Tan, Wee Kiat

    2012-01-01

    This paper describes how a generic computer laboratory equipped with 52 workstations is set up for teaching IT-related courses and other general purpose usage. The authors have successfully constructed a lab management system based on decentralised, client-side software virtualisation technology using Linux and free software tools from VMware that…

  5. Computers in Libraries, 2000: Proceedings (15th, Washington, D.C., March 15-17, 2000).

    ERIC Educational Resources Information Center

    Nixon, Carol, Comp.; Burmood, Jennifer, Comp.

    Topics of the Proceedings of the 15th Annual Computers in Libraries Conference (March 15-17, 2000) include: Linux and open source software in an academic library; a Master Trainer Program; what educators need to know about multimedia and copyright; how super searchers find business information online; managing print costs; new technologies in wide…

  6. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  7. Linux Makes the Grade: An Open Source Solution That's Time Has Come

    ERIC Educational Resources Information Center

    Houston, Melissa

    2007-01-01

    In 2001, Indiana officials at the Department of Education were taking stock. The schools had an excellent network infrastructure and had installed significant numbers of computers for 1 million public school enrollees. Yet students were spending less than an hour a week on the computer. It was then that state officials knew each student needed a…

  8. Mobile Situational Awareness Tool: Unattended Ground Sensor-Based Remote Surveillance System

    DTIC Science & Technology

    2014-09-01

    into prototyped WSNs. In 2012, the Raspberry Pi, an SBC with an ARM processor running GNU/Linux, also designed for students and hobbyists, entered the market selling for only $25 each [30]. The Raspberry Pi was the size of a credit card, had the ability to connect to a wide variety of peripherals, including Wi-Fi adapters and cameras, and had enough processing power to play high-definition video [31]. The Raspberry Pi proved to be

  9. Learnable Models for Information Diffusion and its Associated User Behavior in Micro-blogosphere

    DTIC Science & Technology

    2012-08-30

    According to the work of Even-Dar and Shapira (2007), we recall the definition of the basic voter model on network G. In the model, each node of G... reason as follows. We started with the K distinct initial nodes and all the other nodes were neutral in the beginning. Recall that we set the average time... memory, running under Linux. ... 7 Conclusion: Unlike the popular

  10. Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.

    PubMed

    Parker, Nicolas J; Parker, Andrew G

    2008-04-18

    The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we needed tools to quickly search a set of reads for near-exact text matches. A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC; the tools can be used by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.
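
    As a rough illustration of the kind of exact-match read search the abstract describes (a sketch under assumed inputs, not the published scripts), the following scans a FASTA file of reads for a query and its reverse complement:

      def revcomp(seq):
          """Reverse complement of a DNA string."""
          return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

      def find_reads(fasta_path, query):
          """Yield (read_id, offset, strand) for every exact occurrence
          of query, or its reverse complement, in a FASTA read set."""
          targets = ((query, "+"), (revcomp(query), "-"))

          def scan(name, seq):
              for needle, strand in targets:
                  pos = seq.find(needle)
                  while pos != -1:
                      yield name, pos, strand
                      pos = seq.find(needle, pos + 1)

          name, parts = None, []
          with open(fasta_path) as fh:
              for line in fh:
                  line = line.strip()
                  if line.startswith(">"):
                      if name is not None:
                          yield from scan(name, "".join(parts))
                      name, parts = line[1:].split()[0], []
                  elif line:
                      parts.append(line)
          if name is not None:
              yield from scan(name, "".join(parts))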

  11. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The SPECfp2000 floating-point scores show trends similar to the GAUSSIAN 98 results.

  12. Galaxy CloudMan: delivering cloud compute clusters.

    PubMed

    Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James

    2010-12-21

    Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.

  13. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    NASA Astrophysics Data System (ADS)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a java application for the Superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, which is a java integrated development environment (IDE). The JaSTA uses double precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Both the interactive GUI and the database package enable a user to set the relevant input parameters (namely, wavelength, complex refractive indices, grain size, etc.) and study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is currently built for a few sets of input parameters, with a plan to create a large database in the future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. The JaSTA aims to provide convenient and quicker data analysis of the optical properties which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in the future. Catalogue identifier: AETB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program, including test data, etc.: 120226886 Distribution format: tar.gz Programming language: Java, Fortran95. Computer: Any Windows or Linux system capable of hosting a java runtime environment, java3D and a fortran95 compiler; developed on a 2.40 GHz Intel Core i3. Operating system: Any Windows or Linux system capable of hosting a java runtime environment, java3D and a fortran95 compiler. RAM: Ranging from a few Mbytes to several Gbytes, depending on the input parameters. Classification: 1.3. External routines: jfreechart-1.0.14 [1] (free plotting library for java), j3d-jre-1.5.2 [2] (3D visualization). Nature of problem: Optical properties of cosmic dust aggregates. Solution method: Java application based on Mackowski and Mishchenko's Superposition T-Matrix code. Restrictions: The program is designed for single processor systems. Additional comments: The distribution file for this program is over 120 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Ranging from a few minutes to several hours, depending on the input parameters. References: [1] http://www.jfree.org/index.html [2] https://java3d.java.net/

  14. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov Websites

    and password will match your NREL network account login/password. From OS X or Linux, open a terminal. Open a Remote Desktop connection using server name WINHPC02 (this is the login node).

  15. Using UAV's to Measure the Urban Boundary Layer

    NASA Astrophysics Data System (ADS)

    Jacob, R. L.; Sankaran, R.; Beckman, P. H.

    2015-12-01

    The urban boundary layer is one of the most poorly studied regions of the atmospheric boundary layer. Since a majority of the world's population now lives in urban areas, it is becoming a more important region to measure and model. The combination of relatively low-cost unmanned aerial vehicles and low-cost sensors can together provide a new instrument for measuring urban and other boundary layers. We have mounted a new sensor and compute platform called Waggle on an off-the-shelf XR8 octo-copter from 3DRobotics. Waggle consists of multiple sensors for measuring pressure, temperature and humidity as well as trace gases such as carbon monoxide, nitrogen dioxide, sulfur dioxide and ozone. A single board computer running Linux included in Waggle on the UAV allows in-situ processing and data storage. Communication of the data is through WiFi or 3G and the Waggle software can save the data in case communication is lost during flight. The flight pattern is a deliberately simple vertical ascent and descent over a fixed location to provide vertical profiles and so flights can be confined to urban parks, industrial areas or the footprint of a single rooftop. We will present results from test flights in urban and rural areas in and around Chicago.

  16. Interactive software tool to comprehend the calculation of optimal sequence alignments with dynamic programming.

    PubMed

    Ibarra, Ignacio L; Melo, Francisco

    2010-07-01

    Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied in calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for training and education on the DP algorithm. Here, we introduce an interactive computer application with a graphical interface, for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the sequence alignment algorithm version and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as researchers who implement the DP algorithm for diverse applications. The software is freely available at: http://melolab.org/sat. The software is written in the Java computer language, thus it runs on all major platforms and operating systems including Windows, Mac OS X and LINUX. All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
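
    Since the tool visualises the classic global-alignment recurrence, a compact reference implementation may help readers connect the scoring matrix to the traceback; this is the textbook Needleman-Wunsch algorithm with a linear gap penalty, not the tool's own Java source, and the default scores are arbitrary.

      def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
          """Fill the DP scoring matrix F and trace back one optimal
          global alignment of sequences a and b."""
          n, m = len(a), len(b)
          F = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              F[i][0] = i * gap
          for j in range(1, m + 1):
              F[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  F[i][j] = max(F[i - 1][j - 1] + s,  # (mis)match
                                F[i - 1][j] + gap,    # gap in b
                                F[i][j - 1] + gap)    # gap in a
          # trace back from the bottom-right corner
          A, B, i, j = [], [], n, m
          while i > 0 or j > 0:
              s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
              if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
                  A.append(a[i - 1]); B.append(b[j - 1]); i -= 1; j -= 1
              elif i > 0 and F[i][j] == F[i - 1][j] + gap:
                  A.append(a[i - 1]); B.append("-"); i -= 1
              else:
                  A.append("-"); B.append(b[j - 1]); j -= 1
          return F[n][m], "".join(reversed(A)), "".join(reversed(B))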

  17. MESAFace, a graphical interface to analyze the MESA output

    NASA Astrophysics Data System (ADS)

    Giannotti, M.; Wise, M.; Mohammed, A.

    2013-04-01

    MESA (Modules for Experiments in Stellar Astrophysics) has become very popular among astrophysicists as a powerful and reliable code to simulate stellar evolution. Analyzing the output data thoroughly may, however, present some challenges and be rather time-consuming. Here we describe MESAFace, a graphical and dynamical interface which provides an intuitive, efficient and quick way to analyze the MESA output. Catalogue identifier: AEOQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOQ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19165 No. of bytes in distributed program, including test data, etc.: 6300592 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer capable of running Mathematica. Operating system: Any capable of running Mathematica. Tested on Linux, Mac, Windows XP, Windows 7. RAM: Recommended 2 Gigabytes or more. Supplementary material: Additional test data files are available. Classification: 1.7, 14. Nature of problem: Find a way to quickly and thoroughly analyze the output of a MESA run, including all the profiles, and have an efficient method to produce graphical representations of the data. Solution method: We created two scripts (to be run consecutively). The first one downloads all the data from a MESA run and organizes the profiles in order of age. All the files are saved as tables or arrays of tables which can then be accessed very quickly by Mathematica. The second script uses the Manipulate function to create a graphical interface which allows the user to choose what to plot from a set of menus and buttons. The information shown is updated in real time. The user can access very quickly all the data from the run under examination and visualize it with plots and tables. Unusual features: Moving the slides in certain regions may cause an error message. This happens when Mathematica is asked to read nonexistent data. The error message, however, disappears when the slides are moved back. This issue does not preclude the good functioning of the interface. Additional comments: The program uses the dynamical capabilities of Mathematica. When the program is opened, Mathematica prompts the user to “Enable Dynamics”. It is necessary to accept before proceeding. Running time: Depends on the size of the data downloaded, on where the data are stored (hard-drive or web), and on the speed of the computer or network connection. In general, downloading the data may take from a minute to several minutes. Loading directly from the web is slower. For example, downloading a 200 MB data folder (a total of 102 files) with a dual-core Intel laptop, P8700, 2 GB of RAM, at 2.53 GHz took about a minute from the hard-drive and about 23 min from the web (with a basic home wireless connection).

  18. g_contacts: Fast contact search in bio-molecular ensemble data

    NASA Astrophysics Data System (ADS)

    Blau, Christian; Grubmüller, Helmut

    2013-12-01

    Short-range interatomic interactions govern many bio-molecular processes. Therefore, identifying close interaction partners in ensemble data is an essential task in structural biology and computational biophysics. A contact search can be cast as a typical range search problem for which efficient algorithms have been developed. However, none of those has yet been adapted to the context of macromolecular ensembles, particularly in a molecular dynamics (MD) framework. Here a set-decomposition algorithm is implemented which detects all contacting atoms or residues in at most O(N log N) run-time, in contrast to the O(N²) complexity of a brute-force approach. Catalogue identifier: AEQA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEQA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 8945 No. of bytes in distributed program, including test data, etc.: 981604 Distribution format: tar.gz Programming language: C99. Computer: PC. Operating system: Linux. RAM: ≈ size of input frame Classification: 3, 4.14. External routines: Gromacs 4.6 [1] Nature of problem: Finding atoms or residues that are closer to one another than a given cut-off. Solution method: Excluding distant atoms from distance calculations by decomposing the given set of atoms into disjoint subsets. Running time: ≤ O(N log N) References: [1] S. Pronk, S. Pall, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J.C. Smith, P. M. Kasson, D. van der Spoel, B. Hess and Erik Lindahl, Gromacs 4.5: a high-throughput and highly parallel open source molecular simulation toolkit, Bioinformatics 29 (7) (2013).
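
    g_contacts uses a set-decomposition scheme; as a generic illustration of why decomposing atoms into spatial subsets beats the brute-force O(N²) all-pairs loop (a standard cell-list sketch, not the g_contacts algorithm itself), consider:

      from collections import defaultdict
      from itertools import product

      def contacts(coords, cutoff):
          """All pairs (i, j), i < j, with |ri - rj| < cutoff.

          Atoms are binned into cubic cells of edge `cutoff`, so each
          atom is tested only against its own and the 26 neighbouring
          cells instead of against every other atom.
          """
          cells = defaultdict(list)
          for i, (x, y, z) in enumerate(coords):
              cells[int(x // cutoff), int(y // cutoff), int(z // cutoff)].append(i)
          cut2 = cutoff * cutoff
          pairs = []
          for (cx, cy, cz), members in cells.items():
              for dx, dy, dz in product((-1, 0, 1), repeat=3):
                  for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                      for i in members:
                          if i < j:
                              d2 = sum((p - q) ** 2 for p, q in zip(coords[i], coords[j]))
                              if d2 < cut2:
                                  pairs.append((i, j))
          return pairs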

  19. SLAM, a Mathematica interface for SUSY spectrum generators

    NASA Astrophysics Data System (ADS)

    Marquard, Peter; Zerf, Nikolai

    2014-03-01

    We present and publish a Mathematica package, which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators, which follow the SLHA standard, like SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables very convenient numerical evaluations within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios like mSUGRA or mhmax and, if needed, enables the user to directly specify the input required by the spectrum generators. In addition, it supports automatic saving and loading of SUSY spectra to and from a SQL database, avoiding rerunning a spectrum generator for a known spectrum. Catalogue identifier: AERX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4387 No. of bytes in distributed program, including test data, etc.: 37748 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer where Mathematica version 6 or higher is running, provided bash and sed are available. Operating system: Linux. Classification: 11.1. External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra. Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database. Restrictions: No general restrictions, specific restrictions are given in the manuscript. Running time: A single spectrum calculation takes much less than one second on a modern PC.

  20. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation - QRPA and projected quasiparticle random phase approximation - PQRPA models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for ⁵⁶Fe and ¹²C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary: Title of program: QRAP (Quasiparticle RAndom Phase approximation) Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines Operating systems: WINDOWS or UNIX Program language used: Fortran-77 Memory required to execute with typical data: 16 Mbytes of RAM memory and 2 MB of hard disk space No. of lines in distributed program, including test data, etc.: ~8000 No. of bytes in distributed program, including test data, etc.: ~256 kB Distribution format: tar.gz Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.

  1. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summaryProgram title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator) Catalogue identifier: AEFJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 69 634 No. of bytes in distributed program, including test data, etc.: 3 980 776 Distribution format: tar.gz Programming language: C Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed Operating system: 32 bit and 64 bit Linux RAM: Typically a few MBs Classification: 11.1 External routines: GLoBES [1,2] and routines/libraries used by GLoBES Subprograms used:Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439 Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References:P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
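
    The heart of the approach is a Metropolis-type random walk whose cost grows with chain length rather than exponentially with the dimension of the parameter space. A minimal sketch of such a sampler, assuming the target density is exp(-chi²/2) over the oscillation parameters (illustrative only; MonteCUBES' actual C interface differs):

      import numpy as np

      def metropolis(chi2, theta0, step, n_samples, seed=0):
          """Sample parameter vectors with Metropolis MCMC.

          chi2: callable returning the chi-square of a parameter vector;
          step: per-parameter Gaussian proposal widths.
          """
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          current = chi2(theta)
          chain = np.empty((n_samples, theta.size))
          for k in range(n_samples):
              proposal = theta + step * rng.standard_normal(theta.size)
              c = chi2(proposal)
              # accept with probability min(1, exp(-(c - current) / 2))
              if np.log(rng.random()) < -(c - current) / 2:
                  theta, current = proposal, c
              chain[k] = theta
          return chain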

  2. AESS: Accelerated Exact Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summaryProgram title: AESS Catalogue identifier: AEJW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: University of Tennessee copyright agreement No. of lines in distributed program, including test data, etc.: 10 861 No. of bytes in distributed program, including test data, etc.: 394 631 Distribution format: tar.gz Programming language: C for processors, CUDA for NVIDIA GPUs Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS Classification: 3, 16.12 Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
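
    For readers unfamiliar with the underlying method, a minimal sketch of Gillespie's direct method follows (a generic reference implementation, not AESS's optimized C/CUDA code; the propensity interface is an assumption). For example, the isomerization A -> B corresponds to stoich=[[-1, 1]] and propensity=lambda x, k: np.array([k[0] * x[0]]).

      import numpy as np

      def gillespie_direct(x0, stoich, propensity, rates, t_end, seed=0):
          """Exact stochastic simulation by Gillespie's direct method.

          x0: initial copy numbers; stoich: per-reaction state changes,
          shape (n_reactions, n_species); propensity(x, rates): current
          per-reaction firing rates.  Returns the (time, state) history.
          """
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.asarray(x0, dtype=float)
          stoich = np.asarray(stoich, dtype=float)
          history = [(t, x.copy())]
          while t < t_end:
              a = propensity(x, rates)
              a0 = a.sum()
              if a0 <= 0:                        # no reaction can fire
                  break
              t += rng.exponential(1.0 / a0)     # time to next reaction
              r = rng.choice(len(a), p=a / a0)   # which reaction fires
              x += stoich[r]
              history.append((t, x.copy()))
          return history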

  3. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    NASA Astrophysics Data System (ADS)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

  4. A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools

    DTIC Science & Technology

    2015-07-14

    computer that establishes an encrypted Virtual Private Network (OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, Apple OSX... platform is granted by an encrypted connection based on the Secure Socket Layer (SSL) protocol, and implemented in the OpenVPN Virtual Private Network

  5. Galaxy CloudMan: delivering cloud compute clusters

    PubMed Central

    2010-01-01

    Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983

  6. Network Penetration Testing and Research

    NASA Technical Reports Server (NTRS)

    Murphy, Brandon F.

    2013-01-01

    This paper will focus on the research and testing done on penetrating a network for security purposes. This research will provide the IT security office new methods of attacks across and against a company's network as well as introduce them to new platforms and software that can be used to better assist with protecting against such attacks. Throughout this paper testing and research has been done on two different Linux-based operating systems, for attacking and compromising a Windows-based host computer. Backtrack 5 and BlackBuntu (Linux-based penetration testing operating systems) are two different "attacker" computers that will attempt to plant viruses and/or exploits on a host Windows 7 operating system, as well as try to retrieve information from the host. On each Linux OS (Backtrack 5 and BlackBuntu) there is penetration testing software which provides the necessary tools to create exploits that can compromise a Windows system as well as other operating systems. This paper will focus on two main methods of deploying exploits onto a host computer in order to retrieve information from a compromised system. One method of deployment for an exploit that was tested is known as a "social engineering" exploit. This type of method requires interaction from an unsuspecting user. With this user interaction, a deployed exploit may allow a malicious user to gain access to the unsuspecting user's computer as well as the network that the computer is connected to. Due to more advanced security settings and antivirus protection and detection, this method is easily identified and defended against. The second method of exploit deployment is the method mainly focused upon within this paper. This method required extensive research on the best way to compromise a security-enabled protected network. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well. With a compromised network, computers and devices can be penetrated through deployed exploits. This paper will illustrate the research done to test the ability to penetrate a network without user interaction, in order to retrieve personal information from a targeted host.

  7. XTALOPT: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Lonie, David C.; Zurek, Eva

    2011-02-01

    The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface. Program summaryProgram title:XTALOPT Catalogue identifier: AEGX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v2.1 or later [1] No. of lines in distributed program, including test data, etc.: 36 849 No. of bytes in distributed program, including test data, etc.: 1 149 399 Distribution format: tar.gz Programming language: C++ Computer: PCs, workstations, or clusters Operating system: Linux Classification: 7.7 External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]. Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics. Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely available to the scientific community for use and collaboration under the GNU Public License. Running time: User dependent. The program runs until stopped by the user.
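
    The "continuous workflow" contrasted above with generation-based algorithms can be sketched in a few lines: whenever one candidate finishes its expensive relaxation, a replacement offspring is generated immediately instead of waiting for the whole generation. The loop below is a generic steady-state sketch under assumed energy/mutate/crossover callables, not XTALOPT's C++ implementation:

      import random

      def continuous_ea(seed_pop, energy, mutate, crossover, n_evals, seed=0):
          """Steady-state evolutionary search: one new candidate per
          completed evaluation, so compute slots never sit idle."""
          rng = random.Random(seed)
          pool = [(energy(s), s) for s in seed_pop]
          for _ in range(n_evals):
              pool.sort(key=lambda es: es[0])
              # pick two parents, biased towards the fitter half
              pa, pb = rng.sample(pool[: max(2, len(pool) // 2)], 2)
              child = mutate(crossover(pa[1], pb[1]))
              pool.append((energy(child), child))
              # drop the worst member to keep the pool size constant
              del pool[max(range(len(pool)), key=lambda i: pool[i][0])]
          return min(pool, key=lambda es: es[0])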

  8. Continuous optical monitoring of a near-shore sea-water column

    NASA Astrophysics Data System (ADS)

    Bensky, T. J.; Neff, B.

    2006-12-01

    Cal Poly San Luis Obispo runs the Central Coast Marine Sciences Center, a south-facing, 1-km-long pier in San Luis Bay on the west coast of California, midway between Los Angeles and San Francisco. The facility is secure and dedicated to marine science research. We have constructed an automated optical profiling system that collects sunlight samples, in half-foot increments, from a 30-foot vertical column of sea-water below the pier. Our implementation lowers a high quality, optically pure fiber cable into the water at 30 minute intervals. Light collected by the submersed fiber aperture is routed to the pier surface where it is spectrally analyzed using an Ocean Optics HR2000 spectrometer. The spectrometer instantly yields the spectrum of the light collected at a given depth. The "spectrum" here is light intensity as a function of wavelength between 200 and 1100 nm in increments of 0.1 nm. Each dive of the instrument takes approximately 80 seconds, lowers the fiber from the surface to a depth of 30 feet, and yields approximately 60 spectra, each one taken at a successively greater depth. A computer logs each spectrum as a function of depth. From such data, we are able to extract total downward photon flux, quantify ocean color, and compute attenuation coefficients. The system is entirely autonomous, includes an integrated data-browser, and can be checked on, or even controlled, over the Internet using a web-browser. The computer runs Linux, data is logged directly to a MySQL database for easy extraction, and a PHP script ties the system together. Current work involves studying light-energy deposition trends and effects of surface action on downward photon flux. This work has been funded by the Office of Naval Research (ONR) and the California Central Coast Research Park Initiative (C3RP).

  9. Using SW4 for 3D Simulations of Earthquake Strong Ground Motions: Application to Near-Field Strong Motion, Building Response, Basin Edge Generated Waves and Earthquakes in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.

    2016-12-01

    Simulation of earthquake ground motions is becoming more widely used due to improvements of numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge-generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently, SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of LINUX clusters.

  10. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  11. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  12. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app. We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  13. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    NASA Astrophysics Data System (ADS)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summary: Program title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PCs with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on a PC with a 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, and from a few seconds up to a few minutes for the Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
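
    For readers unfamiliar with the second half of the method, the sketch below shows one classical fourth-order Runge-Kutta step applied to a linear system y' = A y + b, the generic shape of such systems of linear differential equations. The 2x2 test matrix is an arbitrary stand-in, not the actual sunrise-diagram system:

    ```python
    import numpy as np

    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Illustrative linear system y' = A y + b, standing in for the
    # master-integral equations; A and b here are arbitrary placeholders.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    b = np.array([0.0, 0.0])
    f = lambda t, y: A @ y + b

    y = np.array([1.0, 0.0])
    t, h = 0.0, 0.01
    for _ in range(100):          # integrate from t = 0 to t = 1
        y = rk4_step(f, t, y, h)
        t += h
    print(t, y)                   # y ~ (cos 1, -sin 1) for this test system
    ```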

  14. Global 30m Height Above the Nearest Drainage

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Winsemius, Hessel; Schellekens, Jaap; Erickson, Tyler; Gao, Hongkai; Savenije, Hubert; van de Giesen, Nick

    2016-04-01

    Variability of the Earth's surface is the primary characteristic affecting the flow of surface and subsurface water. Digital elevation models, usually represented as height maps above some well-defined vertical datum, are widely used to compute hydrologic parameters such as local flow directions, drainage area, drainage network pattern, and many others. Usually, deriving these parameters at a global scale requires significant effort. One hydrological characteristic introduced in the last decade is Height Above the Nearest Drainage (HAND): a digital elevation model normalized using the nearest drainage. This parameter has been shown to be useful for many hydrological and more general-purpose applications, such as landscape hazard mapping, landform classification, remote sensing and rainfall-runoff modeling. One of the essential characteristics of HAND is its ability to capture heterogeneities in local environments that are difficult to measure or model otherwise. While many applications of HAND have been published in the academic literature, no studies analyze its variability on a global scale, especially using higher resolution DEMs, such as the new, one arc-second (approximately 30m) resolution version of SRTM. In this work, we present the first global version of HAND computed using a mosaic of two DEMs: 30m SRTM and the Viewfinderpanorama DEM (90m). The lower resolution DEM was used to cover latitudes above 60 degrees north and below 56 degrees south, where SRTM is not available. We compute HAND using the unmodified version of the input DEMs to ensure consistency with the original elevation model. We have parallelized processing by generating a homogenized, equal-area version of the HydroBASINS catchments. The resulting catchment boundaries were used to perform processing using the 30m resolution DEM. To compute HAND, a new version of D8 local drainage directions as well as flow accumulation were calculated. The latter was used to estimate river heads by incorporating fixed and variable thresholding methods. The resulting HAND dataset was analyzed for its spatial variability and used to assess the global distribution of the main landform types: valley, ecotone, slope, and plateau. The method used to compute HAND was implemented in PCRaster software running on the Google Compute Engine platform under Ubuntu Linux. Google Earth Engine was used to perform mosaicing and clipping of the original DEMs as well as to provide access to the final product. The effort took about three months of computing time on an eight-core CPU virtual machine.
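
    The definition of HAND itself is simple to sketch: walk downstream along the D8 flow directions from every cell until a drainage cell is reached, then subtract the two elevations. The following is a minimal per-cell sketch of that definition (array shapes are assumptions), not the parallelized PCRaster implementation used for the global product:

    ```python
    import numpy as np

    def compute_hand(dem, drainage, flow_dir):
        """Height Above the Nearest Drainage for each cell: follow the D8
        local drainage directions downstream to the first drainage cell
        and subtract elevations.
        dem: (rows, cols) elevations; drainage: boolean river mask;
        flow_dir: (rows, cols, 2) array of (dy, dx) steps per cell."""
        rows, cols = dem.shape
        hand = np.full(dem.shape, np.nan)
        for r in range(rows):
            for c in range(cols):
                y, x, seen = r, c, set()
                while not drainage[y, x] and (y, x) not in seen:
                    seen.add((y, x))
                    dy, dx = flow_dir[y, x]
                    y, x = y + dy, x + dx
                    if not (0 <= y < rows and 0 <= x < cols):
                        break                    # flow leaves the tile
                else:
                    if drainage[y, x]:           # reached a river cell
                        hand[r, c] = dem[r, c] - dem[y, x]
        return hand
    ```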

  15. GPS Auto-Navigation Design for Unmanned Air Vehicles

    NASA Technical Reports Server (NTRS)

    Nilsson, Caroline C. A.; Heinzen, Stearns N.; Hall, Charles E., Jr.; Chokani, Ndaona

    2003-01-01

    A GPS auto-navigation system is designed for Unmanned Air Vehicles. The objective is to enable the air vehicle to be used as a test-bed for novel flow control concepts. The navigation system uses pre-programmed GPS waypoints. The actual GPS position, heading, and velocity are collected by the flight computer, a PC104 system running Real-Time Linux, and compared with the desired waypoint. The navigator then determines the necessity of a heading correction and outputs the correction in the form of a commanded bank angle, for a level coordinated turn, to the controller system. This controller system consists of 5 controllers (pitch rate PID, yaw damper, bank angle PID, velocity hold, and altitude hold) designed for a closed-loop non-linear aircraft model with linear aerodynamic coefficients. The ability and accuracy of using GPS data is validated by a GPS flight. The autopilots are also validated in flight. The autopilot unit flight validations show that the autopilots function as designed. The aircraft model, generated in Matlab SIMULINK, is also enhanced by the flight data to accurately represent the actual aircraft.
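
    A minimal sketch of the navigator's output stage, assuming a simple proportional law from heading error to desired turn rate: for a level coordinated turn, turn rate and bank angle are related by psi_dot = g tan(phi) / V, so the commanded bank is phi = atan(V psi_dot / g). The gain and bank limit below are illustrative, not the values used in the flight computer described above:

    ```python
    import math

    G = 9.81  # m/s^2

    def bank_for_heading_correction(heading_err_deg, airspeed, k_p=0.8,
                                    max_bank_deg=30.0):
        """Map a heading error to a commanded bank angle for a level
        coordinated turn, using phi = atan(V * psi_dot / g)."""
        desired_turn_rate = math.radians(k_p * heading_err_deg)  # rad/s
        phi = math.atan(airspeed * desired_turn_rate / G)
        max_bank = math.radians(max_bank_deg)
        return math.degrees(max(-max_bank, min(max_bank, phi)))  # clamp

    print(bank_for_heading_correction(15.0, airspeed=25.0))  # ~28.2 deg
    ```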

  16. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    NASA Astrophysics Data System (ADS)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler in Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. GNU utilities of make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using Open Multiple Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.

  17. ParDRe: faster parallel duplicated reads removal tool for sequencing studies.

    PubMed

    González-Domínguez, Jorge; Schmidt, Bertil

    2016-05-15

    Current next generation sequencing technologies often generate duplicated or near-duplicated reads that (depending on the application scenario) do not provide any interesting biological information but can increase memory requirements and computational time of downstream analysis. In this work we present ParDRe, a de novo parallel tool to remove duplicated and near-duplicated reads through the clustering of Single-End or Paired-End sequences from fasta or fastq files. It uses a novel bitwise approach to compare the suffixes of DNA strings and employs hybrid MPI/multithreading to reduce runtime on multicore systems. We show that ParDRe is up to 27.29 times faster than Fulcrum (a representative state-of-the-art tool) on a platform with two 8-core Sandy-Bridge processors. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/pardre/ jgonzalezd@udc.es. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
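
    The bitwise suffix comparison can be sketched as follows: encode each base in 2 bits, after which a single XOR plus mask compares many bases at once instead of looping character by character. This is a sketch of the idea only, not ParDRe's C++/MPI implementation:

    ```python
    CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

    def encode(seq):
        """Pack a DNA string into an integer, 2 bits per base."""
        value = 0
        for base in seq:
            value = (value << 2) | CODE[base]
        return value

    def suffixes_equal(seq1, seq2, length):
        """Compare the last `length` bases of two reads with one XOR
        and a mask over the low 2*length bits."""
        mask = (1 << (2 * length)) - 1
        return ((encode(seq1) ^ encode(seq2)) & mask) == 0

    print(suffixes_equal("ACGTACGT", "TTTTACGT", 4))  # True: both end in ACGT
    print(suffixes_equal("ACGTACGT", "TTTTACGA", 4))  # False
    ```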

  18. A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas

    NASA Astrophysics Data System (ADS)

    Gudehus, D. H.

    2001-05-01

    A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses, has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/~gudehus/binary.html.

  19. Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Santillan, Alfredo; Hernandez-Cervantes, Liliana; Gonzalez-Ponce, Alejandro; Kim, Jongsoo

    The numerical simulations associated with the interaction of High Velocity Clouds (HVC) with the Magnetized Galactic Interstellar Medium (ISM) are a powerful tool to describe the evolution of the interaction of these objects in our Galaxy. In this work we present a new project referred to as Theoretical Virtual Observatories. It is oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool that consists of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. In this Website the user can make use of the existing numerical simulations from the database or run a new simulation, introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.

  20. MATISSE-v1.5 and MATISSE-v2.0: new developments and comparison with MIRAMER measurements

    NASA Astrophysics Data System (ADS)

    Simoneau, Pierre; Caillault, Karine; Fauqueux, Sandrine; Huet, Thierry; Labarre, Luc; Malherbe, Claire; Rosier, Bernard

    2009-05-01

    MATISSE is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). The spectral bandwidth ranges from 0.4 to 14 μm. Natural backgrounds include atmosphere (taking into account spatial variability), low and high altitude clouds, sea and land. The current version MATISSE-v1.5 can be run on SUN and IBM workstations as well as on PCs under Windows and Linux environments. A human-machine interface (IHM) developed in Java is also implemented. MATISSE-v2.0 retains all the MATISSE-v1.5 functionalities and includes a new sea surface radiance model depending on wind speed, wind direction and the fetch value. The release of this new version is planned for April 2009. This paper gives a description of MATISSE-v1.5 and MATISSE-v2.0 and shows preliminary comparison results between generated images and images measured during the MIRAMER campaign, which was held in May 2008 in the Mediterranean Sea.

  1. The MROI fast tip-tilt correction and target acquisition system

    NASA Astrophysics Data System (ADS)

    Young, John; Buscher, David; Fisher, Martin; Haniff, Christopher; Rea, Alexander; Seneta, Eugene B.; Sun, Xiaowei; Wilson, Donald; Farris, Allen; Olivares, Andres; Selina, Robert

    2012-07-01

    The fast tip-tilt correction system for the Magdalena Ridge Observatory Interferometer (MROI) is being designed and fabricated by the University of Cambridge. The design of the system is currently at an advanced stage and the performance of its critical subsystems has been verified in the laboratory. The system has been designed to meet a demanding set of specifications including satisfying all performance requirements in ambient temperatures down to -5 °C, maintaining the stability of the tip-tilt fiducial over a 5 °C temperature change without recourse to an optical reference, and a target acquisition mode with a 60” field-of-view. We describe the important technical features of the system, which uses an Andor electron-multiplying CCD camera protected by a thermal enclosure, a transmissive optical system with mounts incorporating passive thermal compensation, and custom control software running under Xenomai real-time Linux. We also report results from laboratory tests that demonstrate (a) the high stability of the custom optic mounts and (b) the low readout and compute latencies that will allow us to achieve a 40 Hz closed-loop bandwidth on bright targets.

  2. Sophia Daemon Version 12

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    Sophia Daemon Version 12 contains the code that is exclusively used by the ‘sophiad’ application. It runs as a service on a Linux host, analyzes network traffic obtained from libpcap and produces a network fingerprint based on hosts and channels. Sophia Daemon Version 12 can, if desired by the user, produce alerts when its fingerprint changes. Sophia Daemon Version 12 can receive data from another Sophia Daemon or raw packet data. It can output data to another Sophia Daemon Version 12, OglNet Version 12 or MySQL. Sophia Daemon Version 12 runs in a passive real-time manner that allows it to be used on a SCADA network. Its network fingerprint is designed to be applicable to SCADA networks rather than general IT networks.

  3. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.

  4. ‘tripleint_cc’: A program for 2-centre variational leptonic Coulomb potential matrix elements using Hylleraas-type trial functions, with a performance optimization study

    NASA Astrophysics Data System (ADS)

    Plummer, M.; Armour, E. A. G.; Todd, A. C.; Franklin, C. P.; Cooper, J. N.

    2009-12-01

    We present a program used to calculate intricate three-particle integrals for variational calculations of solutions to the leptonic Schrödinger equation with two nuclear centres, in which inter-leptonic distances (electron-electron and positron-electron) are included directly in the trial functions. The program has been used so far in calculations of He-H¯ interactions and positron-H2 scattering; however, the precisely defined integrals are applicable to other situations. We include a summary discussion of how the program has been optimized from a 'legacy'-type code to a more modern high-performance code, with a performance improvement factor of up to 1000. Program summary: Program title: tripleint.cc Catalogue identifier: AEEV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 829 No. of bytes in distributed program, including test data, etc.: 91 798 Distribution format: tar.gz Programming language: Fortran 95 (fixed format) Computer: Modern PC (tested on AMD processor) [1], IBM Power5 [2], Cray XT4 [3], similar Operating system: Red Hat Linux [1], IBM AIX [2], UNICOS [3] Has the code been vectorized or parallelized?: Serial (multi-core shared memory may be needed for some large jobs) RAM: Dependent on parameter sizes and option to use intermediate I/O. Estimates for practical use: 0.5-2 GBytes (with intermediate I/O); 1-4 GBytes (all-memory: the preferred option). Classification: 2.4, 2.6, 2.7, 2.9, 16.5, 16.10, 20 Nature of problem: The 'tripleint.cc' code evaluates three-particle integrals needed in certain variational (in particular: Rayleigh-Ritz and generalized-Kohn) matrix elements for solution of the Schrödinger equation with two fixed centres (the solutions may then be used in subsequent dynamic nuclear calculations). Specifically, the integrals are defined by Eq. (16) in the main text and contain terms proportional to r_ij × r_ik / r_jk, i ≠ j, i ≠ k, j ≠ k, with r_ij the distance between leptons i and j. The article also briefly describes the performance optimizations used to increase the speed of evaluation of the integrals enough to allow detailed testing and mapping of the effect of varying non-linear parameters in the variational trial functions. Solution method: Each integral is solved using prolate spheroidal coordinates and series expansions (with cut-offs) of the many-lepton expressions. 1-d integrals and sub-integrals are solved analytically by various means (the program automatically chooses the most accurate of the available methods for each set of parameters and function arguments), while two of the three integrations over the prolate spheroidal coordinates λ are carried out numerically. Many similar integrals with identical non-linear variational parameters may be calculated with one call of the code. Restrictions: There are limits to the number of points for the numerical integrations, to the cut-off variable itaumax for the many-lepton series expansions, and to the maximum powers of Slater-like input functions. For runs near the limit of the cut-off variable and with certain small-magnitude values of variational non-linear parameters, the code can require large amounts of memory (an option using some intermediate I/O is included to offset this).
Unusual features: In addition to the program, we also present a summary description of the techniques and ideology used to optimize the code, together with accuracy tests and indications of performance improvement. Running time: The test runs take 1-15 minutes on HPCx [2] as indicated in Section 5 of the main text. A practical run with 729 integrals, 40 quadrature points per dimension and itaumax = 8 took 150 minutes on a PC (e.g., [1]); a similar run with 'medium' accuracy, e.g. for parameter optimization (see Section 2 of the main text), with 30 points per dimension and itaumax = 6 took 35 minutes. References: [1] PC: memory 2.72 GB, CPU AMD Opteron 246 dual-core, 2×2 GHz, OS GNU/Linux, kernel Linux 2.6.9-34.0.2.ELsmp. [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/ (visited May 2009). [3] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/ (visited May 2009).

  5. Naval Open Architecture Machinery Control Systems for Next Generation Integrated Power Systems

    DTIC Science & Technology

    2012-05-01

    [Architecture diagram residue: a machinery controller framework with machinery control, power control and ship system services layered over OS/RTOS adaptation middleware (for OS portability) on computer hardware with TCP/UDP/IP networking and serial/device interfaces. Supported operating systems include DOS, Windows, Linux, OS/2, QNX and SCO Unix; computers include ISA-compatible motherboards, workstations and portables (Compaq, Dell ...).]

  6. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation of the system on an embedded platform. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings and (3) stereo reconstruction results for several free-form objects.
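
    The final triangulation step rests on the classical relation between disparity and depth for a rectified stereo pair, Z = f B / d; since the focal length f changes with the zoom setting, this is where the interpolated calibration enters. A sketch with illustrative numbers only:

    ```python
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Classical triangulation for a rectified stereo pair:
        Z = f * B / d, with focal length f in pixels, baseline B in
        metres and matched disparity d in pixels."""
        return focal_px * baseline_m / disparity_px

    # e.g. 1200 px focal length (varies with the zoom setting),
    # 0.12 m baseline, 24 px matched disparity
    print(depth_from_disparity(24.0, focal_px=1200.0, baseline_m=0.12))  # 6.0 m
    ```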

  7. Field Instrumentation With Bricks: Wireless Networks Built From Tough, Cheap, Reliable Field Computers

    NASA Astrophysics Data System (ADS)

    Fatland, D. R.; Anandakrishnan, S.; Heavner, M.

    2004-12-01

    We describe tough, cheap, reliable field computers configured as wireless networks for distributed high-volume data acquisition and low-cost data recovery. Running GNU/Linux under the open source model, these network nodes ('Bricks') are intended for either autonomous or managed deployment for many months in harsh Arctic conditions. We present here results from Generation-1 Bricks used in 2004 for glacier seismology research in Alaska and Antarctica and describe future generation Bricks in terms of core capabilities and a growing list of field applications. Subsequent generations of Bricks will feature low-power embedded architecture, large data storage capacity (GB), long-range telemetry (15 km+, up from 3 km currently), and robust operational software. The list of Brick applications is growing to include geodetic GPS, bioacoustics (bats to whales), volcano seismicity, tracking marine fauna, ice sounding via distributed microwave receivers and more. This NASA-supported STTR project capitalizes on advancing computer/wireless technology to get scientists more data per research budget dollar, solving system integration problems and thereby getting researchers out of the hardware lab and into the field. One exemplary scenario: an investigator can install a Brick network in a remote polar environment to collect data for several months and then fly over the site to recover the data via wireless telemetry. In the past year Brick networks have moved beyond proof-of-concept to the full-bore development and testing stage; they will be a mature and powerful tool available for IPY 2007-8.

  8. Cloud prediction of protein structure and function with PredictProtein for Debian.

    PubMed

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Staniewski, Cedric; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome.

  9. Cloud Prediction of Protein Structure and Function with PredictProtein for Debian

    PubMed Central

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome. PMID:23971032

  10. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net johan.elf@icm.uu.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  11. Unix Survival Guide.

    PubMed

    Stein, Lincoln D

    2015-09-03

    Most bioinformatics software has been designed to run on Linux and other Unix-like systems. Unix is different from most desktop operating systems because it makes extensive use of a text-only command-line interface. It can be a challenge to become familiar with the command line, but once a person becomes used to it, there are significant rewards, such as the ability to string a commonly used series of commands together with a script. This appendix will get you started with the command line and other Unix essentials. Copyright © 2015 John Wiley & Sons, Inc.

  12. REX3DV1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holm, Elizabeth A.

    2002-03-28

    This code is a FORTRAN code for three-dimensional Monte Carlo Potts Model (MCPM) simulation of recrystallization and grain growth. A continuum grain structure is mapped onto a three-dimensional lattice. The mapping procedure is analogous to color bitmapping the grain structure; grains are clusters of pixels (sites) of the same color (spin). The total system energy is given by the Potts Hamiltonian, and the kinetics of grain growth are determined through a Monte Carlo technique with a nonconserved order parameter (Glauber dynamics). The code can be compiled and run on UNIX/Linux platforms.
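
    A minimal two-dimensional sketch of one MCPM flip attempt under Glauber dynamics (the code itself is three-dimensional FORTRAN; all parameters below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    Q = 16                     # number of spin states (grain orientations)
    L = 32                     # lattice edge; 2D here for brevity
    spins = rng.integers(Q, size=(L, L))

    def site_energy(s, y, x, state):
        """Potts energy of one site: count unlike nearest neighbours."""
        nbrs = [s[(y - 1) % L, x], s[(y + 1) % L, x],
                s[y, (x - 1) % L], s[y, (x + 1) % L]]
        return sum(n != state for n in nbrs)

    def glauber_step(s, kT=0.5):
        """Flip attempt with a non-conserved order parameter."""
        y, x = rng.integers(L, size=2)
        new = rng.integers(Q)
        dE = site_energy(s, y, x, new) - site_energy(s, y, x, s[y, x])
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            s[y, x] = new

    for _ in range(10000):     # grains coarsen as unlike boundaries shrink
        glauber_step(spins)
    ```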

  13. Technical improvements during 2005 at the La Plata Reflector Telescope

    NASA Astrophysics Data System (ADS)

    Bareilles, F. A.; Schwartz, M. A.; Garcia, R. E.; Solans, J. H.; Fernández Lajús, E.

    We present here the technical developments carried out at the 0.8-m Reflector telescope of the La Plata Observatory during 2005, namely: the development of new software, running under GNU/Linux, for the control of the CCD Star I Camera; the design and construction of an infrared control for the telescope and dome movements; and the calculation and building of the primary- and secondary-mirror baffles. These are framed in a plan for improvement, updating and automation of this historic telescope. FULL TEXT IN SPANISH

  14. ChronQC: a quality control monitoring system for clinical next generation sequencing.

    PubMed

    Tawari, Nilesh R; Seow, Justine Jia Wen; Perumal, Dharuman; Ow, Jack L; Ang, Shimin; Devasia, Arun George; Ng, Pauline C

    2018-05-15

    ChronQC is a quality control (QC) tracking system for clinical implementation of next-generation sequencing (NGS). ChronQC generates time series plots for various QC metrics to allow comparison of current runs to historical runs. ChronQC has multiple features for tracking QC data including Westgard rules for clinical validity, laboratory-defined thresholds and historical observations within a specified time period. Users can record their notes and corrective actions directly onto the plots for long-term recordkeeping. ChronQC facilitates regular monitoring of clinical NGS to enable adherence to high quality clinical standards. ChronQC is freely available on GitHub (https://github.com/nilesh-tawari/ChronQC), Docker (https://hub.docker.com/r/nileshtawari/chronqc/) and the Python Package Index. ChronQC is implemented in Python and runs on all common operating systems (Windows, Linux and Mac OS X). tawari.nilesh@gmail.com or pauline.c.ng@gmail.com. Supplementary data are available at Bioinformatics online.
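
    As an illustration of the kind of check ChronQC automates, the sketch below applies the Westgard 1-3s rule to a hypothetical per-run coverage metric: a run is flagged when the metric falls outside the historical mean plus or minus 3 standard deviations. This sketches the rule only and is not ChronQC's API:

    ```python
    import statistics

    def westgard_1_3s(history, current):
        """Flag the current run if the metric falls outside the
        historical mean +/- 3 standard deviations (Westgard 1-3s)."""
        mean = statistics.fmean(history)
        sd = statistics.stdev(history)
        return abs(current - mean) > 3 * sd

    past_depths = [102.1, 98.7, 101.4, 99.9, 100.6, 97.8, 103.0]  # mean coverage
    print(westgard_1_3s(past_depths, 100.2))  # False: within limits
    print(westgard_1_3s(past_depths, 88.0))   # True: run flagged
    ```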

  15. Data Collection with Linux in the Undergraduate Physics Lab

    NASA Astrophysics Data System (ADS)

    Ramey, R. Dwayne

    2004-11-01

    Electronic data devices such as photogates can greatly facilitate data collection in the undergraduate physics laboratory. Unfortunately, these devices have several practical drawbacks. While the photogates themselves are not particularly expensive, manufacturers of these devices have created intermediary hardware devices for data buffering and manipulation. These devices, while useful in some contexts, greatly increase the overall price of data collection and, through the use of proprietary software, limit the ability of the end user to customize the software. As an alternative, I outline the procedure for establishing a computer-based data collection system that consists of open source software and user-constructed connections. The data collection system consists of the wiring needed to connect a data device to a computer and the software needed to collect and manipulate data. Data devices can be connected to a computer either through the USB port or through the gameport of a sound card. Software capable of collecting and manipulating the data from a photogate-type device on a Linux system has been developed and will be discussed. Results for typical undergraduate photogate-based experiments will be shown; error limits and data collection rates will be discussed for both the gameport and USB connections.

  16. Digital Plasma Control System for Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Ferrara, M.; Wolfe, S.; Stillerman, J.; Fredian, T.; Hutchinson, I.

    2004-11-01

    A digital plasma control system (DPCS) has been designed to replace the present C-Mod system, which is based on a hybrid analog-digital computer. The initial implementation of DPCS comprises two 64-channel, 16-bit, low-latency cPCI digitizers, each with 16 analog outputs, controlled by a rack-mounted single-processor Linux server, which also serves as the compute engine. A prototype system employing three older 32-channel digitizers was tested during the 2003-04 campaign. The hybrid's linear PID feedback system was emulated by IDL code executing a synchronous loop, using the same target waveforms and control parameters. Reliable real-time operation was accomplished under a standard Linux OS (RH9) by locking memory and disabling interrupts during the plasma pulse. The DPCS-computed outputs agreed to within a few percent with those produced by the hybrid system, except for discrepancies due to offsets and non-ideal behavior of the hybrid circuitry. The system operated reliably, with no sample loss, at more than twice the 10 kHz design specification, providing extra time for implementing more advanced control algorithms. The code is fault-tolerant and produces consistent output waveforms even with 10% sample loss.
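
    A minimal sketch of the kind of synchronous PID update such a loop performs each cycle (gains, plant and loop rate below are placeholders, not C-Mod parameters):

    ```python
    def pid_step(error, state, dt, kp, ki, kd):
        """One synchronous PID update: proportional, integral and
        derivative terms from the current error sample."""
        state["integral"] += error * dt
        derivative = (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev": 0.0}
    dt = 1.0 / 10000.0              # 10 kHz design loop rate
    target, measured = 1.0, 0.0
    for _ in range(5):
        out = pid_step(target - measured, state, dt, kp=2.0, ki=50.0, kd=1e-4)
        measured += 0.1 * out * dt  # toy plant response, illustration only
    ```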

  17. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing ranges from high-performance parallel computing and GRID computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and communicate with existing applications so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system: any number and type of machines can join to provide computational power. This asynchronous message-based system can achieve response times on the order of seconds. For efficiency, communications between distributed tasks are often done at the start and end of the tasks, but intermediate status of the tasks can also be provided.
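
    The central role of the processing queue, with message priority and retry so tasks never get lost, can be sketched with a simple in-process priority queue (the real system uses BEA's WebLogic JMS; all names below are illustrative):

    ```python
    import queue

    tasks = queue.PriorityQueue()   # lower number = higher priority

    def submit(priority, job_id, part, payload):
        tasks.put((priority, (job_id, part, payload)))

    def worker(max_retries=3):
        """Pick a task, process it, and re-queue it on failure so work
        is never lost - the role played by the JMS queue above."""
        priority, (job_id, part, payload) = tasks.get()
        for _ in range(max_retries):
            try:
                result = payload()           # the actual computation
                tasks.task_done()
                return job_id, part, result
            except Exception:
                continue                     # retry this task
        submit(priority + 1, job_id, part, payload)  # demote and re-queue
        tasks.task_done()

    submit(0, "job-1", 0, lambda: 2 + 2)
    print(worker())                          # ('job-1', 0, 4)
    ```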

  18. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
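
    A minimal master/worker skeleton in the same spirit, written here with mpi4py for brevity (mpiWrapper itself is C++ and uses separate management and execution threads per node; the commands below are placeholders):

    ```python
    # Run with, e.g.:  mpirun -n 4 python wrapper.py
    from mpi4py import MPI
    import subprocess

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # Task manager: hand out one placeholder command per request.
        tasks = [["echo", f"subtask {i}"] for i in range(16)]
        stopped = 0
        while stopped < size - 1:
            status = MPI.Status()
            comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)  # "ready"
            task = tasks.pop() if tasks else None                   # None = stop
            comm.send(task, dest=status.Get_source())
            if task is None:
                stopped += 1
    else:
        # Subtask executor: launch the unmodified serial program repeatedly.
        while True:
            comm.send(rank, dest=0, tag=1)
            cmd = comm.recv(source=0)
            if cmd is None:
                break
            subprocess.run(cmd, check=False)
    ```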

  19. ExScal Backbone Network Architecture

    DTIC Science & Technology

    2005-01-01

    A backbone network of 802.11 battery-powered nodes was laid over the sensor network. We adopted the Stargate platform for the backbone tier to serve as the basis for...its head. XSS Hardware and Network: XSS stands for eXtreme Scaling Stargate. A Stargate is a Linux-based single-board computer. It has a 400 MHz

  20. Controller and data acquisition system for SIDECAR ASIC driven HAWAII detectors

    NASA Astrophysics Data System (ADS)

    Ramaprakash, Anamparambu; Burse, Mahesh; Chordia, Pravin; Chillal, Kalpesh; Kohok, Abhay; Mestry, Vilas; Punnadi, Sujit; Sinha, Sakya

    2010-07-01

    SIDECAR is an Application Specific Integrated Circuit (ASIC) which can be used for control and data acquisition from the near-IR HAWAII detectors offered by Teledyne Imaging Sensors (TIS), USA. The standard interfaces provided by Teledyne are a COM API and socket servers running under the MS Windows platform. These interfaces communicate with the ASIC (and the detector) through an intermediate card called the JWST ASIC Drive Electronics (JADE2). As part of an ongoing programme of several years for developing astronomical focal plane array (CCD, CMOS and hybrid) controllers and data acquisition systems (CDAQs), IUCAA is currently developing the next generation of controllers employing Virtex-5 family FPGA devices. We present here the capabilities which are built into these new CDAQs for handling HAWAII detectors. In our system, the computer which hosts the application programme, user interface and device drivers runs on a Linux platform. It communicates through a hot-pluggable USB interface (with an optional optical fibre extender) to the FPGA-based card which replaces the JADE2. The FPGA board, in turn, controls the SIDECAR ASIC and through it a HAWAII-2RG detector, both of which are located in a liquid-nitrogen-cooled cryogenic test Dewar setup. The system can acquire data over 1, 4, or 32 readout channels, with or without binning, at different speeds; it can define sub-regions for readout and offers various readout schemes such as Fowler sampling, up-the-ramp, etc. In this paper, we present the performance results obtained from a prototype system.
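
    Fowler sampling, one of the readout schemes mentioned, averages the first and last N non-destructive reads of a ramp and differences the means, reducing read noise by roughly sqrt(N). A sketch on a toy ramp, not the IUCAA firmware:

    ```python
    import numpy as np

    def fowler_signal(ramps, n_pairs):
        """Fowler sampling: average the first and last n_pairs
        non-destructive reads of a ramp and difference the means.
        `ramps` has shape (n_reads, height, width)."""
        first = ramps[:n_pairs].mean(axis=0)
        last = ramps[-n_pairs:].mean(axis=0)
        return last - first

    reads = np.cumsum(np.ones((16, 4, 4)), axis=0)  # toy linear ramp, slope 1
    print(fowler_signal(reads, n_pairs=4)[0, 0])    # 12.0 = 12 reads apart
    ```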

  1. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one speed and isotropic scattering case. The solution is based on the Case eigen-function formalism. The relevant X functions are evaluated accurately by the Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and the angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct. Program summary: Program title: Milne Catalogue identifier: AEGS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 701 No. of bytes in distributed program, including test data, etc.: 6845 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC under Linux or Windows Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP Classification: 4.11, 21.1, 21.2 Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature. Running time: The test included in the distribution takes a few seconds to run.
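
    The Double Exponential (tanh-sinh) quadrature used for the X functions substitutes x = tanh((π/2) sinh t), after which the plain trapezoid rule in t converges extremely fast even for integrands with endpoint singularities. A minimal sketch on (-1, 1):

    ```python
    import math

    def de_quadrature(f, n=30, h=0.1):
        """Double Exponential (tanh-sinh) rule on (-1, 1): trapezoid
        sum in t after the substitution x = tanh((pi/2) sinh t)."""
        total = 0.0
        for k in range(-n, n + 1):
            t = k * h
            g = 0.5 * math.pi * math.sinh(t)
            x = math.tanh(g)
            w = 0.5 * math.pi * math.cosh(t) / math.cosh(g) ** 2  # dx/dt
            total += w * f(x)
        return h * total

    # 1/sqrt(1-x^2) integrates to pi on (-1, 1) despite endpoint singularities
    print(de_quadrature(lambda x: 1.0 / math.sqrt(1.0 - x * x)))  # ~3.14159
    ```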

  2. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  3. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    NASA Astrophysics Data System (ADS)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system enables users to easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It is also easily possible to install/uninstall packages and to enable/disable init daemons. When rebuilding a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in a class of information processing education in our college. According to a questionnaire survey of the 43 students who used the Live Linux CD, our Live Linux is useful for about 80 percent of students. From these results, we conclude that our system is able to easily and automatically rebuild a useful Live Linux in a short time.

  4. A user's guide to Sandia's latin hypercube sampling software : LHS UNIX library/standalone version.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Wyss, Gregory Dane

    2004-07-01

    This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
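
    The underlying technique is easy to sketch: for each variable, split its range into n equal strata, draw one point per stratum, and shuffle the strata independently per variable so every one-dimensional projection is evenly covered. A minimal sketch of the technique, not the Sandia LHS code:

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_vars, seed=0):
        """Basic Latin hypercube sample on the unit hypercube: one point
        per stratum per variable, strata shuffled per variable."""
        rng = np.random.default_rng(seed)
        samples = np.empty((n_samples, n_vars))
        for j in range(n_vars):
            strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
            samples[:, j] = rng.permutation(strata)
        return samples

    print(latin_hypercube(5, 2))   # 5 points, each row one multivariate sample
    ```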

  5. Eternal Sunshine of the Spotless Machine: Protecting Privacy with Ephemeral Channels

    PubMed Central

    Dunn, Alan M.; Lee, Michael Z.; Jana, Suman; Kim, Sangman; Silberstein, Mark; Xu, Yuanzhong; Shmatikov, Vitaly; Witchel, Emmett

    2014-01-01

    Modern systems keep long memories. As we show in this paper, an adversary who gains access to a Linux system, even one that implements secure deallocation, can recover the contents of applications’ windows, audio buffers, and data remaining in device drivers—long after the applications have terminated. We design and implement Lacuna, a system that allows users to run programs in “private sessions.” After the session is over, all memories of its execution are erased. The key abstraction in Lacuna is an ephemeral channel, which allows the protected program to talk to peripheral devices while making it possible to delete the memories of this communication from the host. Lacuna can run unmodified applications that use graphics, sound, USB input devices, and the network, with only 20 percentage points of additional CPU utilization. PMID:24755709

  6. The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality

    NASA Technical Reports Server (NTRS)

    Conway, Darrel J.; Hughes, Steven P.

    2010-01-01

    The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).

  7. Thermal Analysis Methods for Aerobraking Heating

    NASA Technical Reports Server (NTRS)

    Amundsen, Ruth M.; Gasbarre, Joseph F.; Dec, John A.

    2005-01-01

    As NASA begins exploration of other planets, a method of non-propulsively slowing vehicles at the planet, aerobraking, may become a valuable technique for managing vehicle design mass and propellant. An example of this is Mars Reconnaissance Orbiter (MRO), which will launch in late 2005 and reach Mars in March of 2006. In order to save propellant, MRO will use aerobraking to modify the initial orbit at Mars. The spacecraft will dip into the atmosphere briefly on each orbit, and during the drag pass, the atmospheric drag on the spacecraft will slow it, thus lowering the orbit apoapsis. The largest area on the spacecraft, and that most affected by the heat generated during the aerobraking process, is the solar arrays. A thermal analysis of the solar arrays was conducted at NASA Langley, to simulate their performance throughout the entire roughly 6-month period of aerobraking. Several interesting methods were used to make this analysis more rapid and robust. Two separate models were built for this analysis, one in Thermal Desktop for radiation and orbital heating analysis, and one in MSC.Patran for thermal analysis. The results from the radiation model were mapped in an automated fashion to the Patran thermal model that was used to analyze the thermal behavior during the drag pass. A high degree of automation in file manipulation as well as other methods for reducing run time were employed, since toward the end of the aerobraking period the orbit period is short, and in order to support flight operations the runs must be computed rapidly. All heating within the Patran Thermal model was combined in one section of logic, such that data mapped from the radiation model and aeroheating model, as well as skin temperature effects on the aeroheating and surface radiation, could be incorporated easily. This approach calculates the aeroheating at any given node, based on its position and temperature as well as the density and velocity at that trajectory point. Run times on several different processors, computer hard drives, and operating systems (Windows versus Linux) were evaluated.

  8. The SAMI2 Open Source Project

    NASA Astrophysics Data System (ADS)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  9. PsyToolkit: a software package for programming psychological experiments using Linux.

    PubMed

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  10. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    PubMed

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need to use proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps for the preparation of a publication-ready scientific manuscript in a Linux-based operating system and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  11. Potential performance bottleneck in Linux TCP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wenji; Crawford, Matt; /Fermilab

    2006-12-01

    TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.

  12. Raspberry Pi: a 35-dollar device for viewing DICOM images.

    PubMed

    Paiva, Omir Antunes; Moreira, Renata de Oliveira

    2014-01-01

    Raspberry Pi is a low-cost computer created for educational purposes. It runs Linux and, most of the time, freeware applications, including software for viewing DICOM images. With an external monitor, the supported resolution (1920 × 1200 pixels) allows for the setup of simple viewing workstations at a reduced cost.

  13. Raspberry Pi: a 35-dollar device for viewing DICOM images*

    PubMed Central

    Paiva, Omir Antunes; Moreira, Renata de Oliveira

    2014-01-01

    Raspberry Pi is a low-cost computer created for educational purposes. It runs Linux and, most of the time, freeware applications, including software for viewing DICOM images. With an external monitor, the supported resolution (1920 × 1200 pixels) allows for the setup of simple viewing workstations at a reduced cost. PMID:25741057

  14. Preparing a scientific manuscript in Linux: Today's possibilities and limitations

    PubMed Central

    2011-01-01

    Background An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow the preparation of a submission-ready scientific manuscript without the need for proprietary software. Findings Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps for preparing a publication-ready scientific manuscript in a Linux-based operating system and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux. PMID:22018246

  15. Interfacing the ControlLogix PLC over EtherNet/IP.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasemir, K. U.; Dalesio, L. R.

    2001-01-01

    The Allen-Bradley ControlLogix [1] line of programmable logic controllers (PLCs) offers several interfaces: Ethernet, ControlNet, DeviceNet, RS-232 and others. The ControlLogix Ethernet interface module 1756-ENET uses EtherNet/IP, the ControlNet protocol [2], encapsulated in Ethernet packets, with specific service codes [3]. A driver for the Experimental Physics and Industrial Control System (EPICS) has been developed that utilizes this EtherNet/IP protocol, both for controllers running the vxWorks RTOS and for a Win32 and Unix/Linux test program. Features, performance and limitations of this interface are presented.
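
    To make the framing concrete, the following Python sketch builds the 24-byte little-endian EtherNet/IP encapsulation header and performs a RegisterSession exchange on the standard TCP port 44818. It illustrates the protocol encapsulation only; it is not the EPICS driver, and the PLC address is a placeholder:

        import socket
        import struct

        def encap(command, session=0, data=b""):
            # 24-byte EtherNet/IP encapsulation header, little-endian:
            # command, length, session handle, status, sender context, options.
            return struct.pack("<HHII8sI", command, len(data), session, 0, b"\x00" * 8, 0) + data

        PLC_IP = "192.168.1.10"  # placeholder address of a ControlLogix Ethernet module
        sock = socket.create_connection((PLC_IP, 44818))  # registered EtherNet/IP TCP port

        # RegisterSession (command 0x0065): payload is protocol version 1 plus zero option flags.
        sock.sendall(encap(0x0065, data=struct.pack("<HH", 1, 0)))
        reply = sock.recv(1024)
        session_handle = struct.unpack_from("<I", reply, 4)[0]  # bytes 4-7 of the reply header
        print(f"session handle: {session_handle:#010x}")
        sock.close()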

  16. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    NASA Astrophysics Data System (ADS)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.

  17. Readout and trigger for the AFP detector at ATLAS experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocian, M.

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016, the two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip whose ARM processor runs Arch Linux. In this paper we give an overview of the AFP detector and of the commissioning steps taken to integrate it with ATLAS TDAQ. First performance results are also presented.

  18. Readout and trigger for the AFP detector at ATLAS experiment

    DOE PAGES

    Kocian, M.

    2017-01-25

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016, the two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip whose ARM processor runs Arch Linux. In this paper we give an overview of the AFP detector and of the commissioning steps taken to integrate it with ATLAS TDAQ. First performance results are also presented.

  19. SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.

    PubMed

    Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart

    2011-03-01

    The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com), for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OSX. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Development and Implementation of GPS Correlator Structures in MATLAB and Simulink with Focus on SDR Applications: Implementation of a Standard GPS Correlator Architecture (Baseline) Implementation of the MIT Quicksynch Sparse Algorithm Development and Implementation of Parallel Circular Correlator Constructs

    DTIC Science & Technology

    2014-05-01

    software is available for a wide variety of operating systems, including Unix, FreeBSD, Linux, Solaris, Novell NetWare, OS X, Microsoft Windows, OS/2, TPF... Word for Xenix systems. Subsequent versions were later written for several other platforms including IBM PCs running DOS (1983) and Apple Macintosh...

  1. Development of a Smart Mobile Data Module for Fetal Monitoring in E-Healthcare.

    PubMed

    Houzé de l'Aulnoit, Agathe; Boudet, Samuel; Génin, Michaël; Gautier, Pierre-François; Schiro, Jessica; Houzé de l'Aulnoit, Denis; Beuscart, Régis

    2018-03-23

    The fetal heart rate (FHR) is a marker of fetal well-being in utero (when monitoring maternal and/or fetal pathologies) and during labor. Here, we developed a smart mobile data module for the remote acquisition and transmission (via a Wi-Fi or 4G connection) of FHR recordings, together with a web-based viewer for displaying the FHR datasets on a computer, smartphone or tablet. In order to define the features required by users, we modelled the fetal monitoring procedure (in home and hospital settings) via semi-structured interviews with midwives and obstetricians. Using this information, we developed a mobile data transfer module based on a Raspberry Pi. When connected to a standalone fetal monitor, the module acquires the FHR signal and sends it (via a Wi-Fi or a 3G/4G mobile internet connection) to a secure server within our hospital information system. The archived, digitized signal data are linked to the patient's electronic medical records. An HTML5/JavaScript web viewer converts the digitized FHR data into easily readable and interpretable graphs for viewing on a computer (running Windows, Linux or MacOS) or a mobile device (running Android, iOS or Windows Phone OS). The data can be viewed in real time or offline. The application includes tools required for correct interpretation of the data (signal loss calculation, scale adjustment, and precise measurements of the signal's characteristics). We performed a proof-of-concept case study of the transmission, reception and visualization of FHR data for a pregnant woman at 30 weeks of amenorrhea. She was hospitalized in the pregnancy assessment unit and FHR data were acquired three times a day with a Philips Avalon® FM30 fetal monitor. The prototype (Raspberry Pi) was connected to the fetal monitor's RS232 port. The emission and reception of prerecorded signals were tested; the web server correctly received the signals, and the FHR recording was visualized in real time on a computer, a tablet and smartphones (running Android and iOS) via the web viewer. This process did not perturb the hospital's computer network. There was no data delay or loss during a 60-min test. The web viewer was tested successfully in the various usage situations. The system was as user-friendly as expected, and enabled rapid, secure archiving. We have developed a system for the acquisition, transmission, recording and visualization of FHR data. Healthcare professionals can view the FHR data remotely on their computer, tablet or smartphone. Integration of FHR data into a hospital information system enables optimal, secure, long-term data archiving.
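
    The acquire-and-forward pattern implemented by the module can be sketched in a few lines of Python. Everything specific below (device path, baud rate, framing, upload endpoint) is a hypothetical stand-in; the real module speaks the fetal monitor's RS232 protocol and posts to a secure hospital server:

        import requests   # third-party: pip install requests
        import serial     # third-party: pip install pyserial

        MONITOR_PORT = "/dev/ttyUSB0"                  # hypothetical RS232-to-USB adapter device
        SERVER_URL = "https://hospital.example/fhr"    # hypothetical archiving endpoint

        # Read raw frames from the monitor's serial port and forward them.
        with serial.Serial(MONITOR_PORT, baudrate=9600, timeout=1) as port:
            while True:
                frame = port.read(64)   # whatever bytes arrived within the timeout
                if frame:
                    requests.post(SERVER_URL, data=frame,
                                  headers={"Content-Type": "application/octet-stream"},
                                  timeout=5)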

  2. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
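
    The core idea, choosing a differencing step that balances truncation error against round-off error, can be illustrated without the library. A minimal Python sketch (not NDL's actual interface):

        import numpy as np

        def forward_diff(f, x, h):
            return (f(x + h) - f(x)) / h            # truncation error O(h)

        def central_diff(f, x, h):
            return (f(x + h) - f(x - h)) / (2 * h)  # truncation error O(h^2)

        eps = np.finfo(float).eps
        x = 1.0
        # Steps that roughly balance truncation against round-off error:
        h_fwd = np.sqrt(eps) * max(abs(x), 1.0)     # ~1e-8 for forward differences
        h_cen = np.cbrt(eps) * max(abs(x), 1.0)     # ~6e-6 for central differences

        print(forward_diff(np.sin, x, h_fwd), central_diff(np.sin, x, h_cen), np.cos(x))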

  3. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB–732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
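
    The property that matters here, statistically independent parallel streams, can be illustrated with NumPy's seed-spawning mechanism; note this is NumPy's facility, not GASPRNG's generators or interface:

        import numpy as np

        # Spawn four independent child seeds from one root seed; each child
        # drives its own generator, mimicking one stream per process or GPU block.
        root = np.random.SeedSequence(20130401)
        streams = [np.random.default_rng(child) for child in root.spawn(4)]

        for i, rng in enumerate(streams):
            print(i, rng.random(3))   # distinct, reproducible sequences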

  4. Computing and combustion

    NASA Technical Reports Server (NTRS)

    Thompson, Daniel

    2004-01-01

    Coming into the Combustion Branch of the Turbomachinery and Propulsion Systems Division, there was not any set project planned out for me to work on. This was understandable, considering I am only in my sophomore year in college. Also, my mentor was a division chief, and it was expected that I would be passed down the line. It took about a week for me to be placed with somebody who could use me. My first project was to write a macro for TecPlot. Commonly, a person would have a 3D contour volume modeling something such as a combustion engine. This 3D volume needed to have slices extracted from it and made into 2D scientific plots with all of the appropriate axes and titles. This was very tedious to do by hand. My macro needed to automate the process. There was some education I needed before I could start, however. First, TecPlot ran on Unix and Linux, like a growing majority of scientific applications. I knew a little about Linux, but I would need to know more to use the software at hand. I took two classes at the Learning Center on Unix and am now comfortable with Linux and Unix. I had already taken Computer Science I and II, and had undergone the transformation from Computer Programmer to Procedural Epistemologist. I knew how to design efficient algorithms; I just needed to learn the macro language. After a little less than a week, I had learned the basics of the language. Like most languages, the best way to learn more of it was by using it. It was decided that it was best that I do the macro in layers, starting simple and adding features as I went. The macro started out slicing with respect to only one axis, and did not make 2D plots out of the slices. Instead, it lined them up inside the solid. Next, I allowed for more than one axis and placed each slice in a separate frame. After this, I added code that transformed each individual slice-frame into a scientific plot. I also made frames for composite volumes, which showed all of the slices in the same XYZ space. I then designed an additional companion macro that exported each frame into its own image file. I then distributed the macros to a test group, and am awaiting feedback. In the meantime, I am researching the possible applications of distributed computing on the National Combustor Code. Many of our Linux boxes are idle for most of the day. The department thinks that it would be wonderful if we could get all of these idle processors to work on a problem under the NCC code. The client software would have to be easily distributed, such as in screensaver format or as a program that only ran when the computer was not in use. This project proves to be an interesting challenge.

  5. Abstract of talk for Silicon Valley Linux Users Group

    NASA Technical Reports Server (NTRS)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; work in the Neuroengineering Lab (Code IC), including an introduction to the extension of the human senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people with Neuroscan caps on, and progress that has been made and how Linux has helped.

  6. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact entwines well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for four simultaneous floating-point operations using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
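
    The subdivision argument is generic: split the data into one chunk per core and the wall time drops roughly in proportion. A Python sketch of the same pattern on a multi-core CPU (the shading function is a stand-in for the actual ray-casting work):

        from multiprocessing import Pool

        import numpy as np

        def shade_chunk(chunk):
            # Stand-in for the per-sample ray-casting work done on one SPU.
            return np.sqrt(chunk) * 255.0

        if __name__ == "__main__":
            heights = np.random.rand(6_000_000)        # simulated tsunami height field
            with Pool(processes=6) as pool:            # one worker per usable SPU
                parts = pool.map(shade_chunk, np.array_split(heights, 6))
            image = np.concatenate(parts)
            print(image.shape)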

  7. Calculation of four-particle harmonic-oscillator transformation brackets

    NASA Astrophysics Data System (ADS)

    Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.

    2010-02-01

    A procedure for precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. The analytical expressions of the four-particle HO transformation brackets are given. The computer code for the calculation of HO transformation brackets proves to be quick and efficient and produces results with small numerical uncertainties. Program summary: Program title: HOTB Catalogue identifier: AEFQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1247 No. of bytes in distributed program, including test data, etc.: 6659 Distribution format: tar.gz Programming language: FORTRAN 90 Computer: Any computer with a FORTRAN 90 compiler Operating system: Windows, Linux, FreeBSD, Tru64 Unix RAM: 8 MB Classification: 17.17 Nature of problem: Calculation of the three-particle and four-particle harmonic-oscillator transformation brackets. Solution method: The method is based on compact expressions for the three-particle harmonic-oscillator brackets, presented in [1], and expressions for the four-particle harmonic-oscillator brackets, presented in this paper. Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e = 28. Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets. Running time: Less than one second for a single harmonic-oscillator transformation bracket. References: G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barret, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.

  8. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented, measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
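
    The benchmark kernel, normalized cross-correlation for template matching, looks like this with OpenCV's Python bindings (the file names are placeholders; the paper's implementation targets the ARM and DSP cores directly):

        import cv2

        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # placeholder input image
        templ = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # placeholder template

        # Normalized cross-correlation; the response peak marks the best match.
        response = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(response)
        print(f"best match at {max_loc}, score {max_val:.3f}")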

  9. Cloud-based interactive analytics for terabytes of genomic variants data.

    PubMed

    Pan, Cuiping; McInnes, Gregory; Deflaux, Nicole; Snyder, Michael; Bingham, Jonathan; Datta, Somalee; Tsao, Philip S

    2017-12-01

    Large scale genomic sequencing is now widely used to decipher questions in diverse realms such as biological function, human diseases, evolution, ecosystems, and agriculture. Given the quantity and diversity these data harbor, a robust and scalable data handling and analysis solution is desired. We present interactive analytics using a cloud-based columnar database built on Dremel to perform information compression, comprehensive quality controls, and biological information retrieval in large volumes of genomic data. We demonstrate that such Big Data computing paradigms can provide orders of magnitude faster turnaround for common genomic analyses, transforming long-running batch jobs submitted via a Linux shell into questions that can be asked from a web browser in seconds. Using this method, we assessed a study population of 475 deeply sequenced human genomes for genomic call rate, genotype and allele frequency distribution, variant density across the genome, and pharmacogenomic information. Our analysis framework is implemented in Google Cloud Platform and BigQuery. Codes are available at https://github.com/StanfordBioinformatics/mvp_aaa_codelabs. cuiping@stanford.edu or ptsao@stanford.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
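
    The interactive pattern described, SQL over variant tables instead of long-running shell jobs, can be sketched with the BigQuery Python client. The table and column names below are hypothetical; the study's actual queries live in the linked repository:

        from google.cloud import bigquery  # pip install google-cloud-bigquery

        client = bigquery.Client()  # assumes GCP credentials are already configured

        # Hypothetical dataset/column names, for illustration only.
        query = """
            SELECT reference_name, start, COUNT(1) AS n_calls
            FROM `my-project.genomics.variants`
            GROUP BY reference_name, start
            ORDER BY n_calls DESC
            LIMIT 10
        """
        for row in client.query(query).result():
            print(row.reference_name, row.start, row.n_calls)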

  10. Resource Isolation Method for Program’S Performance on CMP

    NASA Astrophysics Data System (ADS)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, bringing benefits to both customers and providers. However, in a data center or cluster there is commonly more than one program running on each server, and programs may interfere with each other. Sometimes the interference has little effect, but it can also cause a serious drop in performance. To avoid this performance interference problem, isolating resources for different programs is a better choice. In this paper we propose a low-cost resource isolation method to improve a program's performance. The method uses Cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and transfers the information to the Resource Assignment Engine; the Resource Assignment Engine calculates the size of the CPU and memory resources that should be applied to the program; and the Cgroups Control Engine divides resources with the Linux tool Cgroups and places the program in a control group for execution. The experimental results show that, using the proposed resource isolation method, a program's performance can be improved.
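
    Underneath the three engines, the mechanism is ordinary writes to the cgroup filesystem. A minimal sketch of the Cgroups Control Engine's job, assuming cgroup-v1 paths (cgroup v2 uses a unified hierarchy and different file names) and root privileges:

        import os

        CG = "/sys/fs/cgroup"   # cgroup-v1 mount point

        os.makedirs(f"{CG}/cpu/demo", exist_ok=True)
        os.makedirs(f"{CG}/memory/demo", exist_ok=True)

        # Cap the group at half of one CPU: quota/period = 50000/100000 us.
        with open(f"{CG}/cpu/demo/cpu.cfs_quota_us", "w") as f:
            f.write("50000")
        # Cap memory at 256 MB.
        with open(f"{CG}/memory/demo/memory.limit_in_bytes", "w") as f:
            f.write(str(256 * 1024 * 1024))

        # Move the current process (and its future children) into the group.
        for controller in ("cpu", "memory"):
            with open(f"{CG}/{controller}/demo/tasks", "w") as f:
                f.write(str(os.getpid()))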

  11. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions are based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165

  12. Graphics interfaces and numerical simulations: Mexican Virtual Solar Observatory

    NASA Astrophysics Data System (ADS)

    Hernández, L.; González, A.; Salas, G.; Santillán, A.

    2007-08-01

    Preliminary results associated with the computational development and creation of the Mexican Virtual Solar Observatory (MVSO) are presented. Basically, the MVSO prototype consists of two parts: the first is related to observations that have been made during the past ten years at the Solar Observation Station (EOS) and at the Carl Sagan Observatory (OCS) of the Universidad de Sonora in Mexico. The second part is associated with the creation and manipulation of a database produced by numerical simulations related to solar phenomena; we are using the MHD ZEUS-3D code. The development of this prototype was made using MySQL, Apache, Java and VSO 1.2, based on GNU and the `open source' philosophy. A graphical user interface (GUI) was created in order to run web-based, remote numerical simulations. For this purpose, Mono was used, because it provides the necessary software to develop and run .NET client and server applications on Linux. Although this project is still under development, we hope to have access, by means of this portal, to other virtual solar observatories and to be able to count on a database created through numerical simulations or, given the case, perform simulations associated with solar phenomena.

  13. XOP: a multiplatform graphical user interface for synchrotron radiation spectral and optics calculations

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Dejus, Roger J.

    1997-11-01

    XOP (X-ray OPtics utilities) is a graphical user interface (GUI) created to execute several computer programs that calculate the basic information needed by a synchrotron beamline scientist (designer or experimentalist). Typical examples of such calculations are: insertion device (undulator or wiggler) spectral and angular distributions, mirror and multilayer reflectivities, and crystal diffraction profiles. All programs are provided to the user under a unified GUI, which greatly simplifies their execution. The XOP optics applications (especially mirror calculations) take their basic input (optical constants, compound and mixture tables) from a flexible file-oriented database, which allows the user to select data from a large number of choices and also to customize their own data sets. XOP includes many mathematical and visualization capabilities. It also permits combining the reflectivities of several mirrors and filters and applying their effect onto a source spectrum. This feature is very useful when calculating the thermal load on a series of optical elements. The XOP interface is written in IDL (Interactive Data Language). An embedded version of XOP, which freely runs under most Unix platforms (HP, Sun, DEC, Linux, etc.) and under Windows 95 and NT, is available upon request.

  14. Cloud-based interactive analytics for terabytes of genomic variants data

    PubMed Central

    Pan, Cuiping; McInnes, Gregory; Deflaux, Nicole; Snyder, Michael; Bingham, Jonathan; Datta, Somalee; Tsao, Philip S

    2017-01-01

    Abstract Motivation Large scale genomic sequencing is now widely used to decipher questions in diverse realms such as biological function, human diseases, evolution, ecosystems, and agriculture. With the quantity and diversity these data harbor, a robust and scalable data handling and analysis solution is desired. Results We present interactive analytics using a cloud-based columnar database built on Dremel to perform information compression, comprehensive quality controls, and biological information retrieval in large volumes of genomic data. We demonstrate such Big Data computing paradigms can provide orders of magnitude faster turnaround for common genomic analyses, transforming long-running batch jobs submitted via a Linux shell into questions that can be asked from a web browser in seconds. Using this method, we assessed a study population of 475 deeply sequenced human genomes for genomic call rate, genotype and allele frequency distribution, variant density across the genome, and pharmacogenomic information. Availability and implementation Our analysis framework is implemented in Google Cloud Platform and BigQuery. Codes are available at https://github.com/StanfordBioinformatics/mvp_aaa_codelabs. Contact cuiping@stanford.edu or ptsao@stanford.edu Supplementary information Supplementary data are available at Bioinformatics online. PMID:28961771

  15. Computational tool for simulation of power and refrigeration cycles

    NASA Astrophysics Data System (ADS)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

    Small improvements in the thermal efficiency of power cycles bring huge cost savings in the production of electricity, so a tool for simulating power cycles allows modeling the optimal changes for best performance. There is also a big boom in research on the Organic Rankine Cycle (ORC), which aims to generate electricity at low power through cogeneration and in which the working fluid is usually a refrigerant. A tool for designing the elements of an ORC cycle and selecting the working fluid is helpful, because cogeneration heat sources differ widely and each case calls for a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in the C++ language with a graphical interface developed in the multiplatform Qt environment; it runs on the Windows and Linux operating systems. The tool allows the design of custom power cycles and the selection of the fluid type (thermodynamic properties are calculated through the CoolProp library), calculates the plant efficiency, identifies the flow fractions in each branch and finally generates a very educational report in PDF format via the LaTeX tool.
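
    Since the working-fluid properties come from the CoolProp library, the kind of calculation the tool automates can be sketched directly against CoolProp's Python bindings. The cycle below is a deliberately simplified ideal Rankine cycle with water (pump work neglected), not output from the authors' C++/Qt application:

        from CoolProp.CoolProp import PropsSI  # pip install coolprop

        p_boiler, p_cond = 8e6, 10e3   # Pa; simplified assumed operating pressures

        h1 = PropsSI("H", "P", p_cond, "Q", 0, "Water")    # saturated liquid leaving condenser
        h3 = PropsSI("H", "P", p_boiler, "Q", 1, "Water")  # saturated vapor leaving boiler
        s3 = PropsSI("S", "P", p_boiler, "Q", 1, "Water")
        h4 = PropsSI("H", "P", p_cond, "S", s3, "Water")   # isentropic turbine expansion

        eta = (h3 - h4) / (h3 - h1)   # turbine work over heat input, pump work neglected
        print(f"thermal efficiency ~ {eta:.1%}")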

  16. Eye-in-Hand Manipulation for Remote Handling: Experimental Setup

    NASA Astrophysics Data System (ADS)

    Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador

    2018-03-01

    A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of target objects with irregular shapes. The overall environmental setup successfully demonstrates the robustness and precision that remote handling tasks require.

  17. Electronics and Software Engineer for Robotics Project Intern

    NASA Technical Reports Server (NTRS)

    Teijeiro, Antonio

    2017-01-01

    I was assigned to mentor high school students for the 2017 FIRST Robotics Competition. Using a team-based approach, I worked with the students to program the robot and applied my electrical background to build the robot from start to finish. I worked with students who had an interest in electrical engineering to teach them about voltage, current, pulse width modulation, solenoids, electromagnets, relays, DC motors, DC motor controllers, crimping and soldering electrical components, Java programming, and robotic simulation. For the simulation, we worked together to generate graphics files, write simulator description format code, operate Linux, and operate SOLIDWORKS. Upon completion of the FRC season, I transitioned over to providing full-time support for the LCS hardware team. During this phase of my internship I helped my co-intern write test steps for two networking hardware DVTs, as well as run cables and update cable running lists.

  18. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack is comprised of UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org

  19. RTS2: a powerful robotic observatory manager

    NASA Astrophysics Data System (ADS)

    Kubánek, Petr; Jelínek, Martin; Vítek, Stanislav; de Ugarte Postigo, Antonio; Nekola, Martin; French, John

    2006-06-01

    RTS2, or Remote Telescope System, 2nd Version, is an integrated package for remote telescope control under the Linux operating system. It is designed to run in fully autonomous mode, picking targets from a database table, storing image meta data to the database, processing images and storing their WCS coordinates in the database and offering Virtual-Observatory enabled access to them. It is currently running on various telescope setups world-wide. For control of devices from various manufacturers we developed an abstract device layer, enabling control of all possible combinations of mounts, CCDs, photometers, roof and cupola controllers. We describe the evolution of RTS2 from Python-based RTS to C and later C++ based RTS2, focusing on the problems we faced during development. The internal structure of RTS2, focusing on object layering, which is used to uniformly control various devices and provides uniform reporting layer, is also discussed.

  20. BioRuby: bioinformatics software for the Ruby programming language.

    PubMed

    Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki

    2010-10-15

    The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it supports many widely used data formats and provides easy access to databases, external programs and public web services, including BLAST, KEGG, GenBank, MEDLINE and GO. BioRuby comes with a tutorial, documentation and an interactive environment, which can be used in the shell, and in the web browser. BioRuby is free and open source software, made available under the Ruby license. BioRuby runs on all platforms that support Ruby, including Linux, Mac OS X and Windows. And, with JRuby, BioRuby runs on the Java Virtual Machine. The source code is available from http://www.bioruby.org/. katayama@bioruby.org

  1. A ChIP-Seq Data Analysis Pipeline Based on Bioconductor Packages.

    PubMed

    Park, Seung-Jin; Kim, Jong-Hwan; Yoon, Byung-Ha; Kim, Seon-Young

    2017-03-01

    Nowadays, huge volumes of chromatin immunoprecipitation-sequencing (ChIP-Seq) data are generated to increase the knowledge on DNA-protein interactions in the cell, and accordingly, many tools have been developed for ChIP-Seq analysis. Here, we provide an example of a streamlined workflow for ChIP-Seq data analysis composed of only four packages in Bioconductor: dada2, QuasR, mosaics, and ChIPseeker. 'dada2' performs trimming of the high-throughput sequencing data. 'QuasR' and 'mosaics' perform quality control and mapping of the input reads to the reference genome and peak calling, respectively. Finally, 'ChIPseeker' performs annotation and visualization of the called peaks. This workflow runs well independently of operating systems (e.g., Windows, Mac, or Linux) and processes the input fastq files into various results in one run. R code is available at github: https://github.com/ddhb/Workflow_of_Chipseq.git.

  2. A ChIP-Seq Data Analysis Pipeline Based on Bioconductor Packages

    PubMed Central

    Park, Seung-Jin; Kim, Jong-Hwan; Yoon, Byung-Ha; Kim, Seon-Young

    2017-01-01

    Nowadays, huge volumes of chromatin immunoprecipitation-sequencing (ChIP-Seq) data are generated to increase the knowledge on DNA-protein interactions in the cell, and accordingly, many tools have been developed for ChIP-Seq analysis. Here, we provide an example of a streamlined workflow for ChIP-Seq data analysis composed of only four packages in Bioconductor: dada2, QuasR, mosaics, and ChIPseeker. ‘dada2’ performs trimming of the high-throughput sequencing data. ‘QuasR’ and ‘mosaics’ perform quality control and mapping of the input reads to the reference genome and peak calling, respectively. Finally, ‘ChIPseeker’ performs annotation and visualization of the called peaks. This workflow runs well independently of operating systems (e.g., Windows, Mac, or Linux) and processes the input fastq files into various results in one run. R code is available at github: https://github.com/ddhb/Workflow_of_Chipseq.git. PMID:28416945

  3. Introduction to Computational Physics for Undergraduates

    NASA Astrophysics Data System (ADS)

    Zubairi, Omair; Weber, Fridolin

    2018-03-01

    This is an introductory textbook on computational methods and techniques intended for undergraduates at the sophomore or junior level in the fields of science, mathematics, and engineering. It provides an introduction to programming languages such as FORTRAN 90/95/2000 and covers numerical techniques such as differentiation, integration, root finding, and data fitting. The textbook also entails the use of the Linux/Unix operating system and other relevant software such as plotting programs, text editors, and mark up languages such as LaTeX. It includes multiple homework assignments.

  4. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life, and vendor costs were increasing while ISS budgets were becoming severely constrained. It therefore became necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The migration began with 3.5 million lines of code running on Unix platforms, with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. The overall migration involves changes to the hardware architecture, including networks, data storage, and highly available resources; this paper concentrates on the Linux migration of the software portion of the ground system. It addresses the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, and the importance of beginning with POSIX-compliant code. It describes the development approach and the software lifecycle, covering phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper also addresses the testing approach, covering all levels of testing: development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with the Unix servers, are included. The deployment approach is addressed as well, including user involvement in testing and the need for a smooth transition while maintaining real-time support. Finally, the paper discusses challenges and lessons learned: COTS product compatibility, the implications of phasing decisions, the tracking of dependencies (particularly non-software dependencies), and the scheduling challenges of providing real-time flight support during the migration while incorporating changes being made simultaneously for flight support.

  5. PLNoise: a package for exact numerical simulation of power-law noises

    NASA Astrophysics Data System (ADS)

    Milotti, Edoardo

    2006-08-01

    Many simulations of stochastic processes require colored noises: here I describe a small program library that generates samples with a tunable power-law spectral density: the algorithm can be modified to generate more general colored noises, and is exact for all time steps, even when they are unevenly spaced (as may often happen in the case of astronomical data, see e.g. [N.R. Lomb, Astrophys. Space Sci. 39 (1976) 447]). The method is exact in the sense that it reproduces a process that is theoretically guaranteed to produce a range-limited power-law spectrum 1/f^β with -1 < β ⩽ 1. The algorithm has a well-behaved computational complexity, it produces a nearly perfect Gaussian noise, and its computational efficiency depends on the required degree of noise Gaussianity. Program summary: Title of program: PLNoise Catalogue identifier: ADXV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXV_v1_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Programming language used: ANSI C Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc version 4.0.0 and 4.0.1 on Apple Mac OS X-10.4 Operating system: All operating systems capable of running an ANSI C compiler No. of lines in distributed program, including test data, etc.: 6238 No. of bytes in distributed program, including test data, etc.: 52 387 Distribution format: tar.gz RAM: The code of the test program is very compact (about 50 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run (like the one discussed in Section 4 in the long write-up) with average list length 2·10, the RAM taken by the list is 200 Kbytes. External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib freely available from Netlib [B.W. Brown, J. Lovato, K. Russell, ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teulkolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines, in case LINPACK is not installed on the target machine. Nature of problem: Exact generation of different types of Gaussian colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701]. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy. Running time: Running time varies widely with different input parameters; however, in a test run like the one in Section 4 in this work, the generation routine took on average about 7 ms for each sample.
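
    For evenly spaced samples, noise with this kind of spectrum is easy to approximate with the common frequency-domain filtering trick sketched below in Python; note this is a different, approximate method, not PLNoise's exact superposition of relaxation processes, and it cannot handle uneven time steps:

        import numpy as np

        def powerlaw_noise(n, beta, rng=None):
            # Approximate 1/f**beta noise by shaping a white spectrum in the
            # frequency domain, then transforming back to the time domain.
            rng = rng or np.random.default_rng()
            freqs = np.fft.rfftfreq(n)
            freqs[0] = freqs[1]   # avoid dividing by zero at DC
            spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
            spectrum *= freqs ** (-beta / 2.0)    # power spectrum proportional to 1/f**beta
            return np.fft.irfft(spectrum, n)

        samples = powerlaw_noise(4096, beta=1.0)  # classic pink noise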

  6. Enhancements to the Sentinel Fireball Network Video Software

    NASA Astrophysics Data System (ADS)

    Watson, Wayne

    2009-05-01

    The Sentinel Fireball Network, which supports imaging of bright meteors (fireballs), has been in existence for over ten years. Nearly five years ago it moved from gathering meteor data with a camera and VCR video tape to a fisheye lens attached to a hardware device, the Sentinel box, which allowed meteor data to be recorded on a PC operating under real-time Linux. In 2006, that software, sentuser, was made available on Apple, Linux, and Windows operating systems using the Python computer language. It provides basic video and management functionality and a small amount of analytic software capability. This paper describes new and planned features of the software and, additionally, reviews some past and present research and networks that use video equipment to collect and analyze fireball data with applicability to sentuser.

  7. Configuring the HYSPLIT Model for National Weather Service Forecast Office and Spaceflight Meteorology Group Applications

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph G.

    2009-01-01

    To deliver dispersion guidance expediently across a diversity of operational situations, National Weather Service Melbourne (MLB) and the Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests National Centers for Environmental Prediction (NCEP) guidance, such as the North American Mesoscale (NAM) and Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecasting (WRF) model output into HYSPLIT format.

  8. Peregrine System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    Users connect to peregrine.hpc.nrel.gov or to one of the login nodes; for example, from a Linux or Mac OS X system: $ ssh -Y <username>@peregrine.hpc.nrel.gov (user-supplied information is enclosed in brackets < >). The page's code example creates a file called hello.F90 containing a small Fortran test program.

  9. A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium

    ERIC Educational Resources Information Center

    Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.

    2016-01-01

    In this paper, we present a simple spreadsheet based simulation activity that can be performed by students at the undergraduate level. This simulation is implemented in free open source software (FOSS) LibreOffice Calc, which is available for both Windows and Linux platform. This activity aims at building the probability distribution for the…

  10. Bridging the Arts and Computer Science: Engaging At-Risk Students through the Integration of Music

    ERIC Educational Resources Information Center

    Moyer, Lisa; Klopfer, Michelle; Ernst, Jeremy V.

    2018-01-01

    Linux Laptop Orchestra (L2Ork), founded in 2009 in the Virginia Tech Music Department's Digital and Interactive Sound & Intermedia Studio, "explores the power of gesture, communal interaction, and the multidimensionality of arts, as well as technology's potential to seamlessly integrate arts and sciences with particular focus on K-12…

  11. Decay of super-heavy particles: user guide of the SHdecay program

    NASA Astrophysics Data System (ADS)

    Barbot, C.

    2004-02-01

    I give here a detailed user guide for the C++ program SHdecay, which has been developed for computing the final spectra of stable particles (protons, photons, LSPs, electrons, neutrinos of the three species and their antiparticles) arising from the decay of a super-heavy X particle. It allows one to compute in great detail the complete decay cascade for any given decay mode into particles of the Minimal Supersymmetric Standard Model (MSSM). In particular, it takes into account all interactions of the MSSM during the perturbative cascade (including not only QCD, but also the electroweak and 3rd generation Yukawa interactions), and includes a detailed treatment of the SUSY decay cascade (for a given set of parameters) and of the non-perturbative hadronization process. All these features allow us to ensure energy conservation over the whole cascade up to a numerical accuracy of a few per mille. Yet, this program also allows one to restrict the computation to QCD or SUSY-QCD frameworks. I detail the input and output files, describe the role of each part of the program, and include some advice for using it best. Program summary: Title of program: SHdecay Catalogue identifier: ADSL Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSL Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer and operating system: Program tested on a PC running Linux (KDE) and SuSE 8.1 Programming language used: C with the STL C++ library, using the standard gnu g++ compiler No. of lines in distributed program: 14 955 No. of bytes in distributed program, including test data, etc.: 624 487 Distribution format: tar gzip file Keywords: Super-heavy particles, fragmentation functions, DGLAP equations, supersymmetry, MSSM, UHECR Nature of physical problem: Obtaining the energy spectra of the final stable decay products (protons, photons, electrons, the three species of neutrinos and the LSPs) of a decaying super-heavy X particle, within the framework of the Minimal Supersymmetric Standard Model (MSSM). It can be done numerically by solving the full set of DGLAP equations in the MSSM for the perturbative evolution of the fragmentation functions D_{p1}^{p2}(x, Q) of any particle p1 into any other p2 (x is the energy fraction carried by the particle p2 and Q its virtuality), and by treating properly the different decay cascades of all unstable particles and the final hadronization of quarks and gluons. In order to obtain proper results at very low values of x (up to x ~ 10^-13), NLO color coherence effects have been included by using the Modified Leading Log Approximation (MLLA). Method of solution: the DGLAP equations are solved by a fourth-order Runge-Kutta method with a fixed step. Typical running time: Around 35 hours for the first run, but the most time consuming sub-programs can be run only once for most applications.
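
    The integrator named in the solution method is the classical fourth-order Runge-Kutta scheme with a fixed step. As a reminder of the scheme, here it is in Python on a toy ODE (not the DGLAP system):

        def rk4_step(f, t, y, h):
            # One classical fourth-order Runge-Kutta step for y' = f(t, y).
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Toy check on y' = -y, whose exact solution at t = 1 is exp(-1).
        y, t, h = 1.0, 0.0, 0.01
        while t < 1.0 - 1e-12:
            y = rk4_step(lambda t, y: -y, t, y, h)
            t += h
        print(y)  # ~0.36788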

  12. CompactPCI/Linux Platform in FTU Slow Control System

    NASA Astrophysics Data System (ADS)

    Iannone, F.; Wang, L.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.

    2004-12-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called `Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate field I/O equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU control system, designed with SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME-MOTOROLA 68030 boards with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field-points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. We therefore had to look for cost-effective solutions, and finally a CompactPCI Intel x86 platform with the Linux operating system was chosen. A software port has been carried out, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communication and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low cost of the open-source solutions. Moreover, the huge number of software packages available in the open-source environment will make maintenance less painful and will open the way to further improvements of the system itself.

  13. The TSO Logic and G2 Software Product

    NASA Technical Reports Server (NTRS)

    Davis, Derrick D.

    2014-01-01

    This internship assignment for spring 2014 was at John F. Kennedy Space Center (KSC), in NASA's Engineering and Technology (NE) group in support of the Control and Data Systems Division (NE-C) within the Systems Hardware Engineering Branch (NE-C4). The primary focus was system integration and benchmarking utilizing two separate computer software products. The first half of this 2014 internship was spent assisting NE-C4's Electronics and Embedded Systems Engineer, Kelvin Ruiz, and fellow intern Scott Ditto with the evaluation of a new piece of software called G2. It is developed by the Gensym Corporation and was introduced to the group as a tool for monitoring launch environments. All fellow interns and employees of the G2 group have been working together to better understand the significance of the G2 application and how KSC can benefit from its capabilities. The second stage of this spring project was to assist with the ongoing integration of a benchmarking tool developed by a group of engineers from a Canadian organization known as TSO Logic. Guided by NE-C4's Computer Engineer, Allen Villorin, NASA's 2014 interns put forth great effort in helping to integrate TSO's software into the Spaceport Processing Systems Development Laboratory (SPSDL) for further testing and evaluation. The TSO Logic group states that its software is designed for "monitoring and reducing energy consumption at in-house server farms and large data centers" and "allows data centers to control the power state of servers, without impacting availability or performance and without changes to infrastructure"; the focus of the assignment was to test this claim. TSO's founder and CEO Aaron Rallo and CTO Chris Tivel both came to KSC to assist with the installation of their software in the SPSDL laboratory. TSO's software was installed onto 24 individual workstations running three different operating systems. The workstations were divided into three groups of 8, each group with its own operating system: the first group ran Ubuntu's Debian-based Linux, the second ran Windows 7 Professional, and the third ran Red Hat Linux. The highlight of this portion of the assignment was to compose documentation expressing the overall impression of the software and its capabilities.

  14. cloudPEST - A python module for cloud-computing deployment of PEST, a program for parameter estimation

    USGS Publications Warehouse

    Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.

    2011-01-01

    This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST in a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and the EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows operating system but are extensible to Linux.
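    cloudPEST itself wraps low-level EC2 command-line tools from Python. As a hedged illustration of that general pattern, the command name and output parsing below are assumptions modeled on the era's Amazon EC2 API tools, not cloudPEST's own functions:

```python
import subprocess

def run_cli(cmd):
    """Run a command-line cloud tool and return its stdout,
    raising CalledProcessError on a non-zero exit code."""
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

def list_instance_ids():
    # 'ec2-describe-instances' was one of the legacy Amazon EC2 API
    # tools; its exact output format is assumed here for illustration.
    out = run_cli(["ec2-describe-instances"])
    return [line.split()[1] for line in out.splitlines()
            if line.startswith("INSTANCE")]

if __name__ == "__main__":
    print(list_instance_ids())
```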

  15. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel-computing performance in a variety of engineering simulation and analysis applications in support of NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal, and almost completely transparent to the user application, and the module can achieve nearly ideal computing speed-up on multi-CPU engineering workstations on all operating system platforms. It can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
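    The module itself is written in C, but the idea of forking lightweight workers and merging their partial results can be sketched in a few lines of Python (an analogy, not the module's interface):

```python
import os
from multiprocessing import Pool

def simulate_chunk(chunk):
    """Stand-in for an engineering kernel applied to one slice of work."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = os.cpu_count() or 1
    # One chunk per CPU; the pool fans the chunks out to worker
    # processes and collects the partial results for merging.
    chunks = [data[i::n] for i in range(n)]
    with Pool(n) as pool:
        partials = pool.map(simulate_chunk, chunks)
    print(sum(partials))
```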

  16. On the concept of the interactive information and simulation system for gas dynamics and multiphysics problems

    NASA Astrophysics Data System (ADS)

    Bessonov, O.; Silvestrov, P.

    2017-02-01

    This paper describes the general idea and the first implementation of the Interactive Information and Simulation System, an integrated environment that combines computational modules for modeling the aerodynamics and aerothermodynamics of re-entry space vehicles with a large collection of information materials on this topic. The internal organization and composition of the system are described and illustrated, and examples of the computational and informational output are presented. The system has a unified implementation for the Windows and Linux operating systems and can be deployed on any modern high-performance personal computer.

  17. Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Graß, Tobias

    2012-03-01

    We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations. Program summary: Program title: Strongdeco Catalogue identifier: AELA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5475 No. of bytes in distributed program, including test data, etc.: 31 071 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which Mathematica can be installed Operating system: Linux, Windows, Mac Classification: 2.9 Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.
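    The decomposition idea, expanding a polynomial wave function into monomials whose exponents label orbital occupations, can be mimicked for a toy two-particle case with SymPy (an illustrative analogue, not part of Strongdeco):

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")
# Two-particle bosonic Laughlin-like factor with m = 2: (z1 - z2)**2.
psi = sp.expand((z1 - z2) ** 2)

# Each monomial z1**a * z2**b maps to a Fock state with one particle in
# angular-momentum orbital a and one in orbital b (both in the same
# orbital when a == b); the coefficient is its unnormalized amplitude.
for exponents, coeff in sp.Poly(psi, z1, z2).terms():
    print("orbitals", exponents, "amplitude", coeff)
```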

  18. The VOLSCAT package for electron and positron scattering of molecular targets: A new high throughput approach to cross-section and resonances computation

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    VOLSCAT is a computer program which implements the Single Center Expansion (SCE) method to solve the scattering equation for the elastic collision of electrons/positrons off molecular targets. The scattering potential needed is calculated by on-the-fly calls to the external SCELib library for molecular properties, recently ported to GPU computing environments and ClearSpeed platforms and made available by means of an Application Program Interface (SCELib-API), which is also provided with the VOLSCAT package in a beta version. The result is a high-throughput approach to the solution of the complex electron/positron-molecule scattering problem, which allows for intensive calculations both in the number of systems that can be studied and in their size. Accurate partial and total elastic cross sections are produced in output, together with the associated eigenphase sums. Indirect scattering processes arising from the formation of temporary negative ions can also be analyzed through the computation of resonance parameters. Program summary: Program title: VOLSCAT V1.0 Catalogue identifier: AEEW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4 618 353 No. of bytes in distributed program, including test data, etc.: 120 307 536 Distribution format: tar.gz Programming language: Fortran90 Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes. The parallel version in the present release of the code is limited to the OpenMP calculation of the exchange potential. The number of OpenMP threads can then be set in the input script. RAM: For a typical (isolated) biomolecule (e.g. Cytosine or Ribose) a converged calculation would require from 320 MB up to 2.5 GB. Word size: 64 bits Classification: 16.5 External routines: LAPACK (dsyev, dgetri, dgetrf) (http://www.netlib.org/lapack/) Nature of problem: This set of codes implements an efficient procedure to calculate the partial cross sections for the scattering between an electron/positron and a molecular target as a function of the collision energy. Solution method: The scattering equations are derived in the framework of the Single Center Expansion (SCE) procedure, which reduces the original three-dimensional problem to a radial (one-dimensional) equation through the expansion of the scattering potential and the system wavefunction in a set of symmetry-adapted (real) spherical harmonics. The local part of the electrostatic interaction between the charged projectile (electron/positron) and the molecular target is provided in input by the SCELib library, which also provides the correlation and polarization corrections for the short-range and long-range parts, respectively, of the interaction. A proper Application Programming Interface (API) is used by VOLSCAT to load the energy-independent part of the potential, while the non-local exchange contribution is approximated by a local form and calculated on the fly in the VOLSCAT run for each desired collision energy.
The resulting SCE one-dimensional homogeneous scattering equation is rewritten in integral form by means of the standard Green's function technique, resulting in a set of coupled Volterra equations which are solved to give the phase shifts and cross sections for any desired impact energy in terms of the partial components defined by the irreducible representations of the symmetry point group to which the target molecule belongs. The total cross section can then be straightforwardly calculated by summing over all the partial cross sections produced in the output. By a Breit-Wigner analysis of the eigenphase sum produced as a function of the energy, one can also obtain the locations of possible resonance states arising in the collision process. Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into available RAM. Additional comments: A beta version of SCELib-API is included in the distribution package. Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used.
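    The resonance analysis mentioned above fits the eigenphase sum to a Breit-Wigner form. A hedged sketch with synthetic data (not VOLSCAT's implementation), using the continuous form delta(E) = delta_bg + atan2(Gamma/2, E_r - E), which rises by pi across the resonance:

```python
import numpy as np
from scipy.optimize import curve_fit

def eigenphase(E, delta_bg, E_r, gamma):
    """Background phase plus a Breit-Wigner resonance term that rises
    smoothly by pi as E sweeps through the resonance energy E_r."""
    return delta_bg + np.arctan2(gamma / 2.0, E_r - E)

# Synthetic eigenphase sum: resonance at 3.0 eV with width 0.4 eV.
E = np.linspace(1.0, 5.0, 200)
rng = np.random.default_rng(0)
delta = eigenphase(E, 0.3, 3.0, 0.4) + 0.01 * rng.normal(size=E.size)

popt, _ = curve_fit(eigenphase, E, delta, p0=(0.0, 2.5, 0.5))
print("E_r = %.3f eV, Gamma = %.3f eV" % (popt[1], popt[2]))
```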

  19. Browndye: A software package for Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Huber, Gary A.; McCammon, J. Andrew

    2010-11-01

    A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems. Program summary: Program title: Browndye Catalogue identifier: AEGT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: MIT license, included in distribution No. of lines in distributed program, including test data, etc.: 143 618 No. of bytes in distributed program, including test data, etc.: 1 067 861 Distribution format: tar.gz Programming language: C++, OCaml (http://caml.inria.fr/) Computer: PC, Workstation, Cluster Operating system: Linux Has the code been vectorised or parallelized?: Yes. Runs on multiple processors with shared memory using pthreads RAM: Depends linearly on size of physical system Classification: 3 External routines: uses the output of APBS [1] (http://www.poissonboltzmann.org/apbs/) as input. APBS must be obtained and installed separately. Expat 2.0.1, CLAPACK, ocaml-expat, Mersenne Twister. These are included in the Browndye distribution. Nature of problem: Exploration and determination of rate constants of bimolecular interactions involving large biological molecules. Solution method: Brownian dynamics with electrostatic, excluded volume, van der Waals, and desolvation forces. Running time: Depends linearly on size of physical system and quadratically on precision of results. The included example executes in a few minutes.
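    The solution method, Brownian dynamics under electrostatic and other forces, rests on a stochastic position update. A minimal single-particle sketch of the standard Ermak-McCammon step (illustrative only; Browndye's rate-constant machinery, force fields and reaction criteria are omitted):

```python
import numpy as np

def brownian_step(x, force, D, dt, kT, rng):
    """One Ermak-McCammon update: deterministic drift (D/kT)*F*dt plus
    an isotropic Gaussian displacement with variance 2*D*dt per axis."""
    return (x + (D / kT) * force(x) * dt
            + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape))

rng = np.random.default_rng(0)
x = np.zeros(3)                       # particle position
spring = lambda x: -5.0 * x           # toy restoring force
for _ in range(10_000):
    x = brownian_step(x, spring, D=0.1, dt=1e-3, kT=1.0, rng=rng)
print(x)                              # fluctuates near the origin
```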

  20. Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David

    2004-05-01

    A parallel reconstruction method, based on an iterative maximum-likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using the serial implementation is 187 minutes, while the parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
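    The underlying iterative ML algorithm can be illustrated with the generic multiplicative MLEM update for a linear emission model; the toy system below is an assumption for illustration, not the authors' exact formulation or parallel partitioning:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Generic maximum-likelihood EM update for y ~ Poisson(A @ x):
    x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1). The paper parallelizes
    iterations like this by splitting projections into segments; here a
    small dense toy system stands in for the breast volume."""
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])   # the A.T @ 1 normalization
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against divide-by-zero
        x *= (A.T @ ratio) / sensitivity
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 20))          # toy projection operator
y = A @ rng.random(20)            # noiseless synthetic projections
x = mlem(A, y)
print("projection residual:", np.linalg.norm(A @ x - y))
```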

  1. Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan

    2015-01-01

    Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for Small Satellite and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512 MB of RAM, a flash memory card, and a wealth of I/O options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.

  2. Assessment of feasibility of running RSNA's MIRC on a Raspberry Pi: a cost-effective solution for teaching files in radiology.

    PubMed

    Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E

    2015-11-01

    The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant, and cost-effective solutions are desirable. We chose the Medical Imaging Resource Center (MIRC), offered by RSNA for free, to build our teaching file. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Pi Foundation: a small control board developed as a low-cost computer for schools, also used in alternative projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of the software was smooth, and the Raspberry Pi handled very well the task of hosting the teaching file repository for our division. Uptime was logged at 100%, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost control board (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.

  3. Vectorized data acquisition and fast triple-correlation integrals for Fluorescence Triple Correlation Spectroscopy

    NASA Astrophysics Data System (ADS)

    Ridgeway, William K.; Millar, David P.; Williamson, James R.

    2013-04-01

    Fluorescence Correlation Spectroscopy (FCS) is widely used to quantify reaction rates and concentrations of molecules in vitro and in vivo. We recently reported Fluorescence Triple Correlation Spectroscopy (F3CS), which correlates three signals together instead of two. F3CS can analyze the stoichiometries of complex mixtures and detect irreversible processes by identifying time-reversal asymmetries. Here we report the computational developments that were required for the realization of F3CS and present the results as the Triple Correlation Toolbox suite of programs. Triple Correlation Toolbox is a complete data analysis pipeline capable of acquiring, correlating and fitting large data sets. Each segment of the pipeline handles error estimates for accurate error-weighted global fitting. Data acquisition was accelerated with a combination of off-the-shelf counter-timer chips and vectorized operations on 128-bit registers. This allows desktop computers with inexpensive data acquisition cards to acquire hours of multiple-channel data with sub-microsecond time resolution. Off-line correlation integrals were implemented as a two-delay-time multiple-tau scheme that scales efficiently with multiple processors and provides an unprecedented view of linked dynamics. Global fitting routines are provided to fit FCS and F3CS data to models containing up to ten species. Triple Correlation Toolbox is a complete package that enables F3CS to be performed on existing microscopes. Catalogue identifier: AEOP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 50189 No. of bytes in distributed program, including test data, etc.: 6135283 Distribution format: tar.gz Programming language: C/Assembly. Computer: Any with GCC and library support. Operating system: Linux and OS X (data acq. for Linux only due to library availability), not tested on Windows. RAM: ≥512 MB. Classification: 16.4. External routines: NIDAQmx (National Instruments), Gnu Scientific Library, GTK+, PLplot (optional) Nature of problem: Fluorescence Triple Correlation Spectroscopy required three things: data acquisition at faster speeds than were possible without expensive custom hardware, triple-correlation routines that could process 1/2 TB data sets rapidly, and fitting routines capable of handling several to a hundred fit parameters and 14,000+ data points, each with error estimates. Solution method: A novel data acquisition concept mixed signal processing with off-the-shelf hardware and data-parallel processing using 128-bit registers found in desktop CPUs. Correlation algorithms used fractal data structures and multithreading to reduce data analysis times. Global fitting was implemented with robust minimization routines and provides feedback that allows the user to critically inspect initial guesses and fits. Restrictions: Data acquisition requires only a National Instruments data acquisition card (it was tested on Linux using card PCIe-6251) and a simple home-built circuit. Unusual features: Hand-coded x86-64 assembly for data acquisition loops (platform-independent C code also provided). Additional comments: A complete collection of tools to perform Fluorescence Triple Correlation Spectroscopy, from data acquisition to two-tau correlation of large data sets, to model fitting.
Running time: 1-5 h of data analysis per hour of data collected. Varies depending on data-acquisition length, time resolution, data density and number of cores used for correlation integrals.
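    The quantity at the heart of F3CS is the triple correlation of a signal with itself at two delays. A naive numpy estimator (illustrative only; the Toolbox's two-delay-time multiple-tau scheme is far more efficient and carries error estimates):

```python
import numpy as np

def triple_correlation(I, tau1, tau2):
    """Naive estimator of g3(tau1, tau2) =
    <I(t) I(t+tau1) I(t+tau2)> / <I>**3; this direct sum costs O(N)
    per delay pair and is for illustration only."""
    n = len(I) - max(tau1, tau2)
    num = np.mean(I[:n] * I[tau1:tau1 + n] * I[tau2:tau2 + n])
    return num / np.mean(I) ** 3

# Uncorrelated shot noise should give g3 ~ 1 at nonzero delays.
signal = np.random.default_rng(3).poisson(5.0, size=100_000).astype(float)
print(triple_correlation(signal, tau1=3, tau2=7))
```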

  4. JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices

    NASA Astrophysics Data System (ADS)

    Bollhöfer, Matthias; Notay, Yvan

    2007-12-01

    A new software code for computing selected eigenvalues and associated eigenvectors of a real symmetric matrix is described. The eigenvalues are either the smallest or those closest to some specified target, which may be in the interior of the spectrum. The underlying algorithm combines the Jacobi-Davidson method with efficient multilevel incomplete LU (ILU) preconditioning. Key features are modest memory requirements and robust convergence to accurate solutions. Parameters needed for incomplete LU preconditioning are automatically computed and may be updated at run time depending on the convergence pattern. The software is easy to use by non-experts and its top-level routines are written in FORTRAN 77. Its potential is demonstrated on a few applications taken from computational physics. Program summary: Program title: JADAMILU Catalogue identifier: ADZT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 101 359 No. of bytes in distributed program, including test data, etc.: 7 493 144 Distribution format: tar.gz Programming language: Fortran 77 Computer: Intel or AMD with g77 and pgf; Intel EM64T or Itanium with ifort; AMD Opteron with g77, pgf and ifort; Power (IBM) with xlf90. Operating system: Linux, AIX RAM: problem dependent Word size: real: 8; integer: 4 or 8, according to user's choice Classification: 4.8 Nature of problem: Any physical problem requiring the computation of a few eigenvalues of a symmetric matrix. Solution method: Jacobi-Davidson combined with multilevel ILU preconditioning. Additional comments: We supply binaries rather than source code because JADAMILU uses the following external packages: MC64. This software is copyrighted and not freely available. COPYRIGHT (c) 1999 Council for the Central Laboratory of the Research Councils. AMD. Copyright (c) 2004-2006 by Timothy A. Davis, Patrick R. Amestoy, and Iain S. Duff. Source code is distributed by the authors under the GNU LGPL licence. BLAS. The reference BLAS is a freely available software package, available from netlib via anonymous ftp and the World Wide Web. LAPACK. The complete LAPACK package or individual routines from LAPACK are freely available on netlib and can be obtained via the World Wide Web or anonymous ftp. For maximal benefit to the community, we added the sources that are our own to the tar.gz file submitted for inclusion in the CPC library. However, as explained in the README file, users who wish to compile the code instead of using binaries should first obtain the sources for the external packages mentioned above (email and/or web addresses are provided). Running time: Problem dependent; the test examples provided with the code take only a few seconds to run; timing results for large-scale problems are given in Section 5.
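    The problem JADAMILU targets, interior eigenvalues of a large sparse symmetric matrix near a chosen target, can be reproduced on a small scale with SciPy's shift-invert Lanczos solver; note this is a different algorithm (JADAMILU's Jacobi-Davidson plus multilevel ILU avoids the sparse factorization that shift-invert performs here):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse symmetric test matrix: a 1-D discrete Laplacian.
n = 2000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# Five eigenvalues closest to an interior target, via shift-invert.
target = 1.0
vals, vecs = eigsh(A, k=5, sigma=target, which="LM")
print(np.sort(vals))
```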

  5. Runwien: a text-based interface for the WIEN package

    NASA Astrophysics Data System (ADS)

    Otero de la Roza, A.; Luaña, Víctor

    2009-05-01

    A new text-based interface for WIEN2k, the full-potential linearized augmented plane-waves (FPLAPW) program, is presented. This code provides an easy-to-use yet powerful way of generating arbitrarily large sets of calculations: properties over a potential energy surface can be computed and WIEN2k parameters explored using a simple input text file. The interface also adds new capabilities to the WIEN2k package, such as the calculation of elastic constants of hexagonal systems and the automatic gathering of relevant information. Additionally, runwien is modular, flexible and intuitive. Program summary: Program title: runwien Catalogue identifier: AECM_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 3 No. of lines in distributed program, including test data, etc.: 62 567 No. of bytes in distributed program, including test data, etc.: 610 973 Distribution format: tar.gz Programming language: gawk (with locale POSIX or similar) Computer: Any running Unix or Linux Operating system: Unix, GNU/Linux Classification: 7.3 External routines: WIEN2k (http://www.wien2k.at/), GAWK (http://www.gnu.org/software/gawk/), rename by L. Wall, a Perl script which renames files, modified by R. Barker to check for the existence of target files, gnuplot (http://www.gnuplot.info/) Subprograms used: Cat Id: ADSY_v1_0/AECB_v1_0, Title: GIBBS/CRITIC, Reference: CPC 158 (2004) 57/CPC 999 (2009) 999 Nature of problem: Creation of a text-based, batch-oriented interface for the WIEN2k package. Solution method: WIEN2k solves the Kohn-Sham equations of a solid using the FPLAPW formalism. Runwien interprets an input file containing the description of the geometry and structure of the solid and drives the execution of the WIEN2k programs. The input is simplified thanks to the default values of the WIEN2k parameters known to runwien. Additional comments: Designed for WIEN2k versions 06.4, 07.2, 08.2, and 08.3. Running time: For the test case (TiC), a single geometry takes 5 to 10 minutes on a typical desktop PC (Intel Pentium 4, 3.4 GHz, 1 GB RAM). The full example, including the calculation of the elastic constants and the equation of state, takes 9 hours and 32 minutes.

  6. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

    BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summary: Program title: BerkeleyGW Catalogue identifier: AELG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open source BSD License. See code for licensing details. No. of lines in distributed program, including test data, etc.: 576 540 No. of bytes in distributed program, including test data, etc.: 110 608 809 Distribution format: tar.gz Programming language: Fortran 90, C, C++, Python, Perl, BASH Computer: Linux/UNIX workstations or clusters Operating system: Tested on a variety of Linux distributions in parallel and serial as well as AIX and Mac OSX RAM: (50-2000) MB per CPU (highly dependent on system size) Classification: 7.2, 7.3, 16.2, 18 External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses. Nature of problem: The excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods based on density-functional theory, fail to correctly capture this physics. Solution method: We construct and solve the Dyson equation for the quasiparticle energies and wavefunctions within the GW approximation for the electron self-energy. We additionally construct and solve the Bethe-Salpeter equation for the correlated electron-hole (exciton) wavefunctions and excitation energies. Restrictions: The material size is limited in practice by the computational resources available. Materials with up to 500 atoms per periodic cell can be studied on large HPCs. Additional comments: The distribution file for this program is approximately 110 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: 1-1000 minutes (depending greatly on system size and processor number).

  7. Glenn Heat Transfer Simulation and Solver Graphical User Interface: Development and Testing

    NASA Technical Reports Server (NTRS)

    Kardamis, Joseph R.

    2004-01-01

    In the Turbine Branch of the Turbomachinery and Propulsion Systems Division, the main objective is researching and developing efficient turbine aerothermodynamics technologies. Creating effective turbines for jet engines is a process which, if based purely on physical experimental testing, would be extremely expensive. It is for this reason, and also for reasons of speed and ease, that the Turbine Branch spends a large amount of effort working with simulations of turbines. Specifically, it focuses on two main fields: Computational Fluid Dynamics (CFD) and experimental data analysis. The experimental field involves comparing experimental results to simulated results, whereas the CFD field involves running these simulations. The simulations are applied to aerodynamics and heat transfer cases, for both steady and unsteady flow conditions, and by and large this work is applied to the domain of flow and heat transfer in axial turbines. The main application used to run these heat flow simulations is GlennHT. This program, recently rewritten in FORTRAN 90, allows the user to input a job file which specifies all the parameters needed to simulate flow through a user-defined grid. There are several other executables used as well, ranging in application from converting grid files to and from particular formats, to merging blocks in a connectivity file, to converting connectivity files to a GlennHT-compatible format. All of these executables are run from the command line in a terminal; some of them have interactive prompts where the user must specify the files to be manipulated after the program starts, while others take all of their parameters from the command line. With this amount of variation comes a good deal of commands and formats to memorize, which can cause slower and less efficient work, as users may forget how to execute a certain program or not remember the pathnames of the files they wish to use. Two years ago, steps were taken to expedite this process with a graphical user interface (GUI) that combines the functionality of all the executables and adds some new functionality, such as residuals graphing and boundary conditions creation. Upon my beginning here at Glenn, many parts of the GUI, which was developed in Java, were nonfunctional. There were also cross-platform issues, as systems in the branch were transitioning from Silicon Graphics (SGI) machines to Linux machines. My goals this summer are to finish the parts of the GUI that are not yet completed, fix parts that did not work correctly, expand the functionality to include other useful features, such as grid surface highlighting, and make the system compatible with both Linux and SGI. I will also be heavily testing the system and providing sufficient documentation on how to use the GUI, as no such documentation existed previously.

  8. Numerical simulation code for self-gravitating Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Madarassy, Enikő J. M.; Toth, Viktor T.

    2013-04-01

    We completed the development of simulation code designed to study the behavior of a conjectured dark matter galactic halo that is in the form of a Bose-Einstein Condensate (BEC). The BEC is described by the Gross-Pitaevskii equation, which can be solved numerically using the Crank-Nicolson method. The gravitational potential, in turn, is described by Poisson's equation, which can be solved using the relaxation method. Our code combines these two methods to study the time evolution of a self-gravitating BEC. The inefficiency of the relaxation method is balanced by the fact that in subsequent time iterations, previously computed values of the gravitational field serve as very good initial estimates. The code is robust (as evidenced by its stability on coarse grids) and efficient enough to simulate the evolution of a system over the course of 10^9 years using a finer (100×100×100) spatial grid, in less than a day of processor time on a contemporary desktop computer. Catalogue identifier: AEOR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5248 No. of bytes in distributed program, including test data, etc.: 715402 Distribution format: tar.gz Programming language: C++ or FORTRAN. Computer: PCs or workstations. Operating system: Linux or Windows. Classification: 1.5. Nature of problem: Simulation of a self-gravitating Bose-Einstein condensate by simultaneous solution of the Gross-Pitaevskii and Poisson equations in three dimensions. Solution method: The Gross-Pitaevskii equation is solved numerically using the Crank-Nicolson method; Poisson's equation is solved using the relaxation method. The time evolution of the system is governed by the Gross-Pitaevskii equation; the solution of Poisson's equation at each time step is used as an initial estimate for the next time step, which dramatically increases the efficiency of the relaxation method. Running time: Depends on the chosen size of the problem. On a typical personal computer, a 100×100×100 grid can be solved with a time span of 10 Gyr in approx. a day of running time.
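    The relaxation solver and its warm start are the key efficiency trick described above. A sketch of a Jacobi relaxation kernel for a discrete Poisson equation on a cubic grid (illustrative, with unit constants; not the distributed code):

```python
import numpy as np

def jacobi_poisson(phi, rho, h, coeff=1.0, n_sweeps=50):
    """Relaxation sweeps for laplacian(phi) = coeff * rho on a cubic
    grid with spacing h. Passing the previous time step's phi as the
    starting guess is what makes relaxation cheap inside a
    time-stepping loop."""
    for _ in range(n_sweeps):
        phi[1:-1, 1:-1, 1:-1] = (
            phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
            phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
            phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] -
            h * h * coeff * rho[1:-1, 1:-1, 1:-1]
        ) / 6.0
    return phi

n = 32
rho = np.zeros((n, n, n)); rho[n // 2, n // 2, n // 2] = 1.0
phi = np.zeros_like(rho)                 # first call: cold start
phi = jacobi_poisson(phi, rho, h=1.0)
phi = jacobi_poisson(phi, rho, h=1.0)    # later calls: warm start
```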

  9. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
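    Module 3's task, finding tandem repeats that satisfy user-specified motif lengths and repeat counts, can be approximated with a short regular-expression scan (a toy illustration; SSR_pipeline's search algorithm is far more efficient, and this sketch also reports overlapping, phase-shifted hits):

```python
import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    """Report perfect tandem repeats as (start, motif, copy count):
    motif length between min_motif and max_motif, repeated at least
    min_repeats times. The lookahead permits overlapping matches."""
    pattern = re.compile(
        r"(?=(([ACGT]{%d,%d})\2{%d,}))"
        % (min_motif, max_motif, min_repeats - 1)
    )
    return [(m.start(), m.group(2), len(m.group(1)) // len(m.group(2)))
            for m in pattern.finditer(seq)]

print(find_ssrs("TTACACACACACGGGATGATGATGATGCC"))
```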

  10. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data.

    PubMed

    Miller, Mark P; Knaus, Brian J; Mullins, Thomas D; Haig, Susan M

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).

  11. Linux VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Describes two methods for connecting to NREL's HPC systems via the HPC VPN: a simple command line, and a NetworkManager GUI configuration (connection name: hpcvpn; gateway: hpcvpn.nrel.gov; user's own UserID), illustrated with screenshots.

  12. Mass Storage System - Gyrfalcon | High-Performance Computing | NREL

    Science.gov Websites

    Describes several options for copying data to the Gyrfalcon mass storage system from the command line of one of Peregrine's login nodes, including copying a directory.tgz archive to /mss/, the rsync command, which compares one directory to another, and the simple Linux cp command for copying a file from one directory to another.

  13. Spare a Little Change? Towards a 5-Nines Internet in 250 Lines of Code

    DTIC Science & Technology

    2011-05-01

    Performing organization: Carnegie Mellon University, School of Computer Science, Pittsburgh, PA 15213. Keywords: Internet reliability, BGP performance, Quagga. This document includes excerpts of the source code for the Linux operating system...

  14. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    PubMed

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based on the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form or, more accurately, as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
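    The core dynamic clamp computation is simple: read the membrane voltage each cycle and inject the summed current of the virtual conductances. A schematic, non-real-time Python sketch (the actual system performs this as hard real-time kernel code, with gating variables and stochastic channels; sign conventions vary between setups):

```python
def dynamic_clamp_current(V, conductances):
    """Summed injected current for virtual conductances at membrane
    voltage V (mV): I = -sum_i g_i * (V - E_i), with g in nS and the
    result in pA. Gating dynamics are omitted for brevity."""
    return -sum(g * (V - E) for g, E in conductances)

# Virtual leak (g = 2 nS, E = -70 mV) and a tonic excitatory conductance.
channels = [(2.0, -70.0), (0.5, 0.0)]
for V in (-80.0, -65.0, -50.0):   # voltages read from the amplifier
    print(f"V = {V} mV -> inject {dynamic_clamp_current(V, channels):.1f} pA")
```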

  15. End-To-End performance test of the LINC-NIRVANA Wavefront-Sensor system.

    NASA Astrophysics Data System (ADS)

    Berwein, Juergen; Bertram, Thomas; Conrad, Al; Briegel, Florian; Kittmann, Frank; Zhang, Xiangyu; Mohr, Lars

    2011-09-01

    LINC-NIRVANA is an imaging Fizeau interferometer for use at near-infrared wavelengths, being built for the Large Binocular Telescope. Multi-conjugate adaptive optics (MCAO) increases the sky coverage and the field of view over which diffraction-limited images can be obtained. For its MCAO implementation, LINC-NIRVANA utilizes four wavefront sensors in total; each of the two beams is corrected by both a ground-layer wavefront sensor (GWS) and a high-layer wavefront sensor (HWS). The GWS controls the adaptive secondary deformable mirror (DM), which is driven by a DSP-based slope computing unit, whereas the HWS controls an internal DM via computations provided by an off-the-shelf multi-core Linux system. Using wavefront sensor data collected from a prior lab experiment, we have shown via simulation that the Linux-based system is sufficient to operate at 1 kHz, with jitter well below the needs of the final system. Based on that setup, we tested the end-to-end performance and latency through all parts of the system, which includes the camera, the wavefront controller, and the deformable mirror. We present our loop control structure and the results of those performance tests.

  16. NGSANE: a lightweight production informatics framework for high-throughput data analysis.

    PubMed

    Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C

    2014-05-15

    The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components depreciate rapidly because of the evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot-swappable modular components, as opposed to the more rigid program-call wrapping by higher-level languages implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for set-up and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. NGSANE is implemented in bash and publicly available under the BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Contact: Denis.Bauer@csiro.au. Supplementary data are available at Bioinformatics online.

  17. Dugong: a Docker image, based on Ubuntu Linux, focused on reproducibility and replicability for bioinformatics analyses.

    PubMed

    Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R

    2018-02-01

    This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates the installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies) in alternative computational environments. The software operates through a user-friendly XFCE4 graphical interface that allows software management and installation by users not fully familiar with the Linux command line, and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open-science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license. Contact: Luiz.nunes@ufabc.edu.br. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  18. High-performance data processing using distributed computing on the SOLIS project

    NASA Astrophysics Data System (ADS)

    Wampler, Stephen

    2002-12-01

    The SOLIS solar telescope collects data at a high rate, resulting in 500 GB of raw data each day. The SOLIS Data Handling System (DHS) has been designed to quickly process this data down to 156 GB of reduced data. The DHS design uses pools of distributed reduction processes that are allocated to different observations as needed. A farm of 10 dual-CPU Linux boxes contains the pools of reduction processes. Control is through CORBA, and data are stored on a fibre channel storage area network (SAN). Three other Linux boxes are responsible for pulling data from the instruments using SAN-based ring buffers. Control applications are Java-based, while the reduction processes are written in C++. This paper presents the overall design of the SOLIS DHS and provides details on the approach used to control the pooled reduction processes. The various strategies used to manage the high data rates are also covered.

  19. BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.

    PubMed

    Abrahamsson, Erik; Plotkin, Steven S

    2009-09-01

    Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows for user-defined settings for color and texture, and runs on either Windows or Linux platforms.

  20. PyFDAP: automated analysis of fluorescence decay after photoconversion (FDAP) experiments.

    PubMed

    Bläßle, Alexander; Müller, Patrick

    2015-03-15

    We developed the graphical user interface PyFDAP for the fitting of linear and non-linear decay functions to data from fluorescence decay after photoconversion (FDAP) experiments. PyFDAP structures and analyses large FDAP datasets and features multiple fitting and plotting options. PyFDAP was written in Python and runs on Ubuntu Linux, Mac OS X and Microsoft Windows operating systems. The software, a user guide and a test FDAP dataset are freely available for download from http://people.tuebingen.mpg.de/mueller-lab. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
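    The kind of fit PyFDAP performs can be illustrated generically with SciPy: a single-exponential decay with constant background fitted to a synthetic intensity trace (an illustrative toy, not PyFDAP's code or data model):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c0, k, offset):
    """Single-exponential decay with a constant background."""
    return c0 * np.exp(-k * t) + offset

t = np.linspace(0, 300, 60)            # minutes after photoconversion
rng = np.random.default_rng(2)
y = decay(t, 1.0, 0.02, 0.1) + 0.02 * rng.normal(size=t.size)

popt, _ = curve_fit(decay, t, y, p0=(1.0, 0.01, 0.0))
print("half-life ~ %.1f min" % (np.log(2) / popt[1]))
```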

  1. NREL's OpenStudio Helps Design More Efficient Buildings (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-07-01

    The National Renewable Energy Laboratory (NREL) has created the OpenStudio software platform, which makes it easier for architects and engineers to evaluate building energy efficiency measures throughout the design process. OpenStudio makes energy modeling more accessible and affordable, helping professionals to design structures with lower utility bills and lower carbon emissions, resulting in a healthier environment. OpenStudio includes a user-friendly application suite that makes the U.S. Department of Energy's EnergyPlus and Radiance simulation engines easier to use for whole-building energy and daylighting performance analysis. OpenStudio is freely available and runs on Windows, Mac, and Linux operating systems.

  2. Daytime Water Detection by Fusing Multiple Cues for Autonomous Off-Road Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, A. L.; Matthies, L. H.; Huertas, A.

    2004-01-01

    Detecting water hazards is a significant challenge for unmanned ground vehicle autonomous off-road navigation. This paper focuses on detecting the presence of water during the daytime using color cameras. A multi-cue approach is taken: evidence of the presence of water is generated from color, texture, and the detection of reflections in stereo range data. A rule base for fusing water cues was developed by evaluating detection results from an extensive archive of data collection imagery containing water. This software has been implemented in a run-time passive perception subsystem and tested thus far under Linux on a Pentium-based processor.

  3. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages together to build the video review system.

  4. CORSET: Service-Oriented Resource Management System in Linux

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Jae; Kim, Chei-Yol; Jung, Sung-In

    Generally, system resources are not sufficient for the many services and applications running on a system. In the real world, services matter more than individual processes, and they differ in priority and importance, so each service should be treated differently with respect to system resources. However, an administrator cannot guarantee that a specific service has adequate resources under unpredictable workloads, because many processes compete for them. We therefore propose a service-oriented resource management subsystem to resolve these problems. It guarantees the performance or QoS of a specific service under changing workloads by satisfying the service's minimum resource requirements.
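    Mainline Linux offers a comparable mechanism in control groups (cgroups). As a hedged illustration of per-service CPU weighting through the cgroup v1 filesystem interface (paths and controller layout vary by distribution, root privileges are required, and this is not CORSET's own subsystem):

```python
import os

# Illustrative only: give a "dbserver" service a doubled CPU weight via
# the cgroup v1 CPU controller (default cpu.shares is 1024). Mount
# points differ on cgroup v2 systems; run as root.
CG = "/sys/fs/cgroup/cpu/dbserver"

os.makedirs(CG, exist_ok=True)
with open(os.path.join(CG, "cpu.shares"), "w") as f:
    f.write("2048")
with open(os.path.join(CG, "tasks"), "a") as f:
    f.write(str(os.getpid()))     # move this process into the group
```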

  5. AtomicJ: An open source software for analysis of force curves

    NASA Astrophysics Data System (ADS)

    Hermanowicz, Paweł; Sarna, Michał; Burda, Kvetoslava; Gabryś, Halina

    2014-06-01

    We present an open source Java application for the analysis of force curves and images recorded with the Atomic Force Microscope. AtomicJ supports a wide range of contact mechanics models and implements procedures that reduce the influence of deviations from the contact model. It generates maps of mechanical properties, including maps of Young's modulus, adhesion force, and sample height. It can also calculate stacks, which reveal how the sample's response to deformation changes with indentation depth. AtomicJ analyzes force curves concurrently on multiple threads, which allows for high-speed analysis. It runs on all popular operating systems, including Windows, Linux, and Macintosh.
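
    As one example of the contact-mechanics relations such tools fit, the classical Hertz model for a spherical indenter, F = (4/3) * E/(1 - v^2) * sqrt(R) * delta^(3/2); fitting it to a measured force-indentation curve yields the Young's modulus maps mentioned above. A minimal sketch (not AtomicJ's code):

      import numpy as np

      def hertz_force(delta, young_modulus, poisson_ratio, tip_radius):
          """Force on a spherical tip indenting an elastic half-space by delta."""
          e_eff = young_modulus / (1.0 - poisson_ratio ** 2)
          depth = np.clip(delta, 0.0, None)        # no force before contact
          return (4.0 / 3.0) * e_eff * np.sqrt(tip_radius) * depth ** 1.5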

  6. libRoadRunner: a high performance SBML simulation and analysis library

    PubMed Central

    Somogyi, Endre T.; Bouteiller, Jean-Marie; Glazier, James A.; König, Matthias; Medley, J. Kyle; Swat, Maciej H.; Sauro, Herbert M.

    2015-01-01

    Motivation: This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs and interactive simulations. Results: libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. Availability and implementation: libRoadRunner binary distributions are available for Mac OS X, Linux and Windows. The library is licensed under Apache License Version 2.0. libRoadRunner is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. Contacts: hsauro@u.washington.edu or somogyie@indiana.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26085503
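
    A minimal usage sketch of the Python interface described above (the model file name is a placeholder for any valid SBML file):

      import roadrunner

      rr = roadrunner.RoadRunner("model.xml")  # JIT-compiles the SBML model
      result = rr.simulate(0, 50, 100)         # start time, end time, points
      print(result[:5])                        # columns: time, then species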

  7. libRoadRunner: a high performance SBML simulation and analysis library.

    PubMed

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows. The library is licensed under Apache License Version 2.0. libRoadRunner is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. Contacts: hsauro@u.washington.edu or somogyie@indiana.edu. Supplementary data are available at Bioinformatics online.

  8. The Invar tensor package: Differential invariants of Riemann

    NASA Astrophysics Data System (ADS)

    Martín-García, J. M.; Yllanes, D.; Portugal, R.

    2008-10-01

    The long-standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10⁵ objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10⁵ relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary: Program title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system: Linux, Unix, Windows XP, MacOS RAM: 100 MB Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. To obtain the Mathematica and Maple database files click on this link. Classification: 1.5, 5 Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants. The Mathematica notebook included in the distribution takes approximately 5 minutes to run.

  9. A new version of Scilab software package for the study of dynamical systems

    NASA Astrophysics Data System (ADS)

    Bordeianu, C. C.; Felea, D.; Beşliu, C.; Jipa, Al.; Grossu, I. V.

    2009-11-01

    This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamics systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability of the users inserting their own ODE or iterative equations. New version program summaryProgram title: Chaos v2.0 Catalogue identifier: AEAP_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1275 No. of bytes in distributed program, including test data, etc.: 7135 Distribution format: tar.gz Programming language: Scilab 5.1.1. Scilab 5.1.1 should be installed before running the program. Information about the installation can be found at http://wiki.scilab.org/howto/install/windows. Computer: PC-compatible running Scilab on MS Windows or Linux Operating system: Windows XP, Linux RAM: below 150 Megabytes Classification: 6.2 Catalogue identifier of previous version: AEAP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 788 Does the new version supersede the previous version?: Yes Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE). Solution method: Numerical solving of ordinary differential equations for the study of chaotic flows. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincare sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies. Numerical solving of iterative equations for the study of maps and fractals. Reasons for new version: The program has been updated to use the new version 5.1.1 of Scilab with new graphical capabilities [2]. Moreover, new use cases have been added which make the handling of the program easier and more efficient. Summary of revisions: A new use case concerning coupled predator-prey models has been added [3]. Three new use cases concerning fractals (Sierpinsky gasket, Barnsley's Fern and Tree) have been added [3]. The graphical user interface (GUI) of the program has been reconstructed to include the new use cases. The program has been updated to use Scilab 5.1.1 with the new graphical capabilities. Additional comments: The program package contains 12 subprograms. 
    interface.sce - the graphical user interface (GUI) that permits the choice of a routine as follows:
      1.sci - Lorenz dynamical system
      2.sci - Chua dynamical system
      3.sci - Rössler dynamical system
      4.sci - Henon map
      5.sci - Lyapunov exponents for the Lorenz dynamical system
      6.sci - Lyapunov exponent for the logistic map
      7.sci - Shannon entropy for the logistic map
      8.sci - Coupled predator-prey model
      1f.sci - Sierpinski gasket
      2f.sci - Barnsley's Fern
      3f.sci - Barnsley's Tree
    Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs, Lyapunov exponent calculation and fractals. References: C.C. Bordeianu, C. Besliu, Al. Jipa, D. Felea, I.V. Grossu, Comput. Phys. Comm. 178 (2008) 788. S. Campbell, J.P. Chancelier, R. Nikoukhah, Modeling and Simulation in Scilab/Scicos, Springer, 2006. R.H. Landau, M.J. Paez, C.C. Bordeianu, A Survey of Computational Physics, Introductory Computational Science, Princeton University Press, 2008.
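
    For illustration, the first of the use cases above (the Lorenz flow) in Python rather than Scilab, with the standard parameter values:

      from scipy.integrate import solve_ivp

      def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = state
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
      # sol.y holds the trajectory; plotting sol.y[0] against sol.y[2]
      # shows the familiar butterfly attractor.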

  10. Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  11. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (MEGA) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of MEGA to enable cross-platform use on Microsoft Windows and Linux operating systems. MEGA X does not require virtualization or emulation software and provides a uniform user experience across platforms. MEGA X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. MEGA X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  12. The Roots of Beowulf

    NASA Technical Reports Server (NTRS)

    Fischer, James R.

    2014-01-01

    The first Beowulf Linux commodity cluster was constructed at NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end computing. In fact, the conditions within Goddard that brought the idea into being were shaped by rich historical roots, strategic pressures brought on by the ramp up of the Federal High-Performance Computing and Communications Program, growth of the open software movement, microprocessor performance trends, and the vision of key technologists. This multifaceted story is told here for the first time from the point of view of NASA project management.

  13. Singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KURTZER, GREGORY; MURIKI, KRISHNA

    Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile; files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed, regardless of whether the programs inside the container are present on the target system, are a different version, or are incompatible versions. Singularity achieves extreme portability without sacrificing usability, thus solving the need for mobility of compute. Singularity containers can be executed within a normal/standard command line process flow.

  14. Tuning Linux to meet real time requirements

    NASA Astrophysics Data System (ADS)

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

    There is a desire to use Linux in military systems. Customers are requesting that contractors use open source software to the maximum possible extent, and Linux is probably the best operating system choice to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem: Linux was not originally built to be a real-time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which builds a microkernel underneath Linux; the microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, the PowerPC platform, which is widely used in the real-time embedded community, was listed as "recovering" [2], so RTAI is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
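
    The paper's kernel tuning is not reproduced here; as a user-space companion to such tuning, a minimal sketch (Linux-only) that runs the current process under the SCHED_FIFO real-time policy and locks its memory so page faults cannot add latency:

      import ctypes
      import os

      # Switch this process to the SCHED_FIFO real-time policy (requires root).
      os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

      # mlockall(2) pins all current and future pages in RAM.
      libc = ctypes.CDLL("libc.so.6", use_errno=True)
      MCL_CURRENT, MCL_FUTURE = 1, 2
      if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
          raise OSError(ctypes.get_errno(), "mlockall failed")

      # ... the time-critical loop now runs with bounded scheduling latency ...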

  15. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are designed for Linux operating systems (OS), the arrival of the WindowsHPC 2008 OS provides the opportunity to evaluate the use of a new platform on which to develop and port climate and earth science models. In particular, we are evaluating Microsoft's Visual Studio Integrated Developer Environment to determine its appropriateness for the climate modeling community. In the initial phases of this project, we have ported GEOS-5, WRF, GISS ModelE, and GFS to Linux on a CX1 and are in the process of porting WRF and ModelE to WindowsHPC 2008. Initial tests on the CX1 Linux OS indicate favorable comparisons in terms of performance and consistency of scientific results when compared with experiments executed on NASA high-end systems. As in the past, NASA's large clusters will continue to be an important part of our objectives. We envision a seamless environment in which an investigator performs model development and testing on a desktop system and can seamlessly transfer execution to supercomputer clusters for production.

  16. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.

    PubMed

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
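
    To make the call-stack approach concrete, a small sketch using rpy2 (a successor to the RPy bindings named above): Python calls an embedded R interpreter directly in memory, with no network round trip.

      import rpy2.robjects as robjects

      # Define an R function and call it from Python; data crosses the
      # language boundary on the shared call stack, not over a socket.
      reverse_complement = robjects.r('''
        function(seq) chartr("ACGT", "TGCA",
                             paste(rev(strsplit(seq, "")[[1]]), collapse = ""))
      ''')
      print(reverse_complement("ATGC")[0])   # -> "GCAT"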

  17. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
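
    To illustrate the discretization such a solver is built on (in Python rather than Matlab): a 5-point Laplacian on the unit square with zero Dirichlet data, relaxed by Jacobi iteration. The actual package is far more general (mixed Dirichlet/Neumann conditions, mesh refinement, sparse direct solves).

      import numpy as np

      n = 64
      h = 1.0 / n
      f = np.ones((n + 1, n + 1))   # right-hand side of -Laplace(u) = f
      u = np.zeros_like(f)          # zero Dirichlet values on the boundary

      for _ in range(5000):         # Jacobi sweeps on interior points
          u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:] +
                                  h * h * f[1:-1, 1:-1])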

  18. CPsuperH2.0: An improved computational tool for Higgs phenomenology in the MSSM with explicit CP violation

    NASA Astrophysics Data System (ADS)

    Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.

    2009-02-01

    We describe the Fortran code CPsuperH2.0, which contains several improvements and extensions of its predecessor CPsuperH. It implements improved calculations of the Higgs-boson pole masses, notably a full treatment of the 4×4 neutral Higgs propagator matrix including the Goldstone boson and a more complete treatment of threshold effects in self-energies and Yukawa couplings, improved treatments of two-body Higgs decays, some important three-body decays, and two-loop Higgs-mediated contributions to electric dipole moments. CPsuperH2.0 also implements an integrated treatment of several B-meson observables, including the branching ratios of B→μμ, B→ττ, B→τν, B→X_sγ and the latter's CP-violating asymmetry A_CP, and the supersymmetric contributions to the B_{s,d}^0-B̄_{s,d}^0 mass differences. These additions make CPsuperH2.0 an attractive integrated tool for analyzing supersymmetric CP and flavour physics as well as searches for new physics at high-energy colliders such as the Tevatron, LHC and linear colliders. Program summary: Program title: CPsuperH2.0 Catalogue identifier: ADSR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 290 No. of bytes in distributed program, including test data, etc.: 89 540 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC running under Linux and computers in a Unix environment Operating system: Linux RAM: 32 Mbytes Classification: 11.1 Catalogue identifier of the previous version: ADSR_v1_0 Journal reference of the previous version: CPC 156 (2004) 283 Does the new version supersede the previous version?: Yes Nature of problem: The calculations of the mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and an improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners, and all the trilinear and quartic Higgs-boson self-couplings are also calculated. The new implementations include a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, two-loop Higgs-mediated contributions to electric dipole moments, and an integrated treatment of several B-meson observables. Solution method: One-dimensional numerical integration for several Higgs-decay modes, iterative treatment of the threshold corrections and Higgs-boson pole masses, and numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide a coherent numerical framework which consistently calculates observables for both low- and high-energy experiments. Summary of revisions: Improved treatment of Higgs-boson masses and propagators. Improved treatment of Higgs-boson couplings and decays. Higgs-mediated two-loop electric dipole moments. B-meson observables. Running time: Less than 0.1 seconds. The program may be obtained from http://www.hep.man.ac.uk/u/jslee/CPsuperH.html.

  19. Linux thin-client conversion in a large cardiology practice: initial experience.

    PubMed

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  20. GANGA: A tool for computational-task management and easy access to Grid resources

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.

    2009-11-01

    In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science. Catalogue identifier: AEEN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL No. of lines in distributed program, including test data, etc.: 224 590 No. of bytes in distributed program, including test data, etc.: 14 365 315 Distribution format: tar.gz Programming language: Python Computer: personal computers, laptops Operating system: Linux/Unix RAM: 1 MB Classification: 6.2, 6.5 Nature of problem: Management of computational tasks for scientific applications on heterogeneous distributed systems, including local, batch farms, opportunistic clusters and Grids. Solution method: High-level job management interface, including command line, scripting and GUI components. Restrictions: Access to the distributed resources depends on installed third-party software such as a batch system client or Grid user interface.
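
    A sketch of what a GANGA session looks like (Job, Executable and Local are names provided by GANGA's interactive interface; details vary between versions, and the executable here is a stand-in for a real analysis):

      j = Job(application=Executable(exe='/bin/echo', args=['hello']),
              backend=Local())      # swap Local() for a batch or Grid backend
      j.submit()                    # submission and bookkeeping are automatic
      print(j.status)               # e.g. 'submitted', 'running', 'completed'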

  1. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers give only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include the sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross cuts of the seeing disk within a given intensity range, and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI, however, is dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
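
    A rough sketch (not the OPSO code) of the central enhancement step: a ring of frames in shared memory whose sliding average suppresses intensifier noise. The frame geometry and window size are assumptions.

      import numpy as np
      from multiprocessing import shared_memory

      H, W, N = 576, 720, 100    # PAL-sized frames, 100-frame window (assumed)
      shm = shared_memory.SharedMemory(create=True, size=H * W * N)
      ring = np.ndarray((N, H, W), dtype=np.uint8, buffer=shm.buf)

      def store(frame, count):
          """A capture task would call this once per grabbed frame."""
          ring[count % N] = frame

      def enhanced(count):
          """Sliding average over the frames captured so far (count >= 1)."""
          return ring[:max(1, min(count, N))].mean(axis=0)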

  2. Establishing Linux Clusters for High-Performance Computing (HPC) at NPS

    DTIC Science & Technology

    2004-09-01

    (Only table-of-contents and figure-caption fragments of this thesis survive extraction; they concern the Intel and Area51 rolls and note that the md5sum generated for the Area51 roll can be checked against the value the vendor provides on the download site.)

  3. Stonix, Version 0.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-05-13

    STONIX is a program for configuring UNIX and Linux computer operating systems. It applies configurations based on the guidance from publicly accessible resources such as: NSA Guides, DISA STIGs, the Center for Internet Security (CIS), USGCB and vendor security documentation. STONIX is written in the Python programming language using the QT4 and PyQT4 libraries to provide a GUI. The code is designed to be easily extensible and customizable.

  4. Generating Computer Forensic Super Timelines under Linux: A Comprehensive Guide for Windows-based Disk Images

    DTIC Science & Technology

    2011-10-01

    (Only fragments survive extraction: a sentence from the French abstract, translated as "this technical memorandum therefore provides a detailed description of the authors' approach for producing more detailed event timelines", followed by standard distribution-license disclaimer text, omitted here.)

  5. Development and Testing of a High-Speed Real-Time Kinematic Precise DGPS Positioning System Between Two Aircraft

    DTIC Science & Technology

    2006-09-01

    (Only acknowledgement and implementation fragments survive extraction: the filter was coded in C++ using KDevelop [28] under SUSE LINUX Professional 9.2 [42], and Linux was chosen as the operating system for its ability to access the serial ports in a reliable fashion.)

  6. GeoBoost: accelerating research involving the geospatial metadata of virus GenBank records.

    PubMed

    Tahsin, Tasnia; Weissenbacher, Davy; O'Connor, Karen; Magge, Arjun; Scotch, Matthew; Gonzalez-Hernandez, Graciela

    2018-05-01

    GeoBoost is a command-line software package developed to address sparse or incomplete metadata in GenBank sequence records that relate to the location of the infected host (LOIH) of viruses. Given a set of GenBank accession numbers corresponding to virus GenBank records, GeoBoost extracts, integrates and normalizes geographic information reflecting the LOIH of the viruses using integrated information from GenBank metadata and related full-text publications. In addition, to facilitate probabilistic geospatial modeling, GeoBoost assigns probability scores for each possible LOIH. Binaries and resources required for running GeoBoost are packed into a single zipped file and freely available for download at https://tinyurl.com/geoboost. A video tutorial is included to help users quickly and easily install and run the software. The software is implemented in Java 1.8, and supported on MS Windows and Linux platforms. gragon@upenn.edu. Supplementary data are available at Bioinformatics online.

  7. Artificial intelligence in the service of system administrators

    NASA Astrophysics Data System (ADS)

    Haen, C.; Barra, V.; Bonaccorsi, E.; Neufeld, N.

    2012-12-01

    The LHCb online system relies on a large and heterogeneous IT infrastructure made from thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. The administration of such a system, and making sure it is working properly, represents a very important workload for the small expert-operator team. Research into automating (some) system administration tasks goes back to 2001, when IBM defined the so-called "self-*" objectives that were supposed to lead to "autonomic computing". In this context, we present a framework that makes use of artificial intelligence and machine learning to monitor and diagnose Linux-based systems and their interaction with software, at a low level and in a non-intrusive way. Moreover, the multi-agent approach we use, coupled with an "object oriented paradigm" architecture, should greatly increase learning speed and highlight relations between problems.

  8. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages.

  9. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages.

  10. Multi-objective Calibration of DHSVM Based on Hydrologic Key Elements in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Liu, L.; Xu, Y. P.

    2017-12-01

    In physically based distributed hydrological models, a large number of parameters are involved, representing the spatial heterogeneity of the watershed and the various processes of the hydrologic cycle. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module for DHSVM using the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), based on parallel computing on a Linux cluster (ɛP-DHSVM). In this study, two hydrologic key elements (i.e., runoff and evapotranspiration) are used as objectives in the multi-objective calibration of the model. MODIS evapotranspiration obtained by SEBAL is adopted to compensate for the lack of evapotranspiration observations. The results show that good runoff simulation performance in single-objective calibration does not ensure good simulation of other hydrologic key elements. The self-developed ɛP-DHSVM model makes multi-objective calibration more efficient and effective, increasing running speed by a factor of more than 20-30. In addition, runoff and evapotranspiration can be simulated very well simultaneously by ɛP-DHSVM, with good values for the two efficiency coefficients (NS of 0.74 for runoff and 0.79 for evapotranspiration; PBIAS of -10.5% and -8.6% for runoff and evapotranspiration, respectively).
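
    The two objectives named above are Nash-Sutcliffe (NS) efficiencies for runoff and evapotranspiration; a minimal sketch of the metric a calibrator such as ɛ-NSGAII would maximize for both series:

      import numpy as np

      def nash_sutcliffe(simulated, observed):
          """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
          simulated = np.asarray(simulated, dtype=float)
          observed = np.asarray(observed, dtype=float)
          return 1.0 - (np.sum((observed - simulated) ** 2) /
                        np.sum((observed - observed.mean()) ** 2))

      # A multi-objective calibration run seeks parameter sets that jointly
      # maximize nash_sutcliffe(sim_runoff, obs_runoff) and
      # nash_sutcliffe(sim_et, obs_et).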

  11. Long-read sequencing data analysis for yeasts.

    PubMed

    Yue, Jia-Xing; Liti, Gianni

    2018-06-01

    Long-read sequencing technologies have become increasingly popular due to their strengths in resolving complex genomic regions. As a leading model organism with small genome size and great biotechnological importance, the budding yeast Saccharomyces cerevisiae has many isolates currently being sequenced with long reads. However, analyzing long-read sequencing data to produce high-quality genome assembly and annotation remains challenging. Here, we present a modular computational framework named long-read sequencing data analysis for yeasts (LRSDAY), the first one-stop solution that streamlines this process. Starting from the raw sequencing reads, LRSDAY can produce chromosome-level genome assembly and comprehensive genome annotation in a highly automated manner with minimal manual intervention, which is not possible using any alternative tool available to date. The annotated genomic features include centromeres, protein-coding genes, tRNAs, transposable elements (TEs), and telomere-associated elements. Although tailored for S. cerevisiae, we designed LRSDAY to be highly modular and customizable, making it adaptable to virtually any eukaryotic organism. When applying LRSDAY to an S. cerevisiae strain, it takes ∼41 h to generate a complete and well-annotated genome from ∼100× Pacific Biosciences (PacBio) reads when running the basic workflow with four threads. Basic experience working within the Linux command-line environment is recommended for carrying out the analysis using LRSDAY.

  12. ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems

    PubMed Central

    Expósito, Roberto R.

    2018-01-01

    Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search for interesting biclusters in binary datasets, which are very popular in fields such as genetics, marketing and text mining. It is based on the state-of-the-art sequential Java tool BiBit, which several studies have shown to be accurate, especially in scenarios that result in many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/. PMID:29608567
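
    The core of BiBit's pattern-grouping methodology, which ParBiBit parallelizes, can be sketched in a few lines (Python here for brevity; the real tools use C++11 threads and MPI, bit-level encodings and additional filters):

      from itertools import combinations

      def bibit_like(rows, min_cols=2, min_rows=2):
          """Rows are bitmasks over the binary columns; every row pair seeds a
          candidate pattern (their AND), and all rows containing that pattern
          form a bicluster."""
          found = set()
          for a, b in combinations(range(len(rows)), 2):
              pattern = rows[a] & rows[b]
              if bin(pattern).count("1") < min_cols:
                  continue
              members = tuple(i for i, r in enumerate(rows)
                              if r & pattern == pattern)
              if len(members) >= min_rows:
                  found.add((pattern, members))
          return found

      # Four rows over four binary columns, encoded as bitmasks:
      print(bibit_like([0b1101, 0b1001, 0b0111, 0b1011]))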

  13. ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Expósito, Roberto R

    2018-01-01

    Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search for interesting biclusters in binary datasets, which are very popular in fields such as genetics, marketing and text mining. It is based on the state-of-the-art sequential Java tool BiBit, which several studies have shown to be accurate, especially in scenarios that result in many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/.

  14. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-06-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme and require a textual description, which makes creating geometries a tedious and error-prone task that is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry, and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach to implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special-purpose hardware, which is not widely available. In this paper SimpleGeo is presented, an implementation of a generic, versatile, interactive geometry modeler using off-the-shelf hardware. It runs on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
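
    To make the Constructive Solid Geometry scheme concrete: primitives are combined by boolean operators into a solid. One common way to evaluate such a tree is with signed distance functions (union = min, intersection = max, difference = max(a, -b)); a minimal sketch, not SimpleGeo's implementation:

      import math

      def sphere(cx, cy, cz, r):
          return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

      def union(a, b):     return lambda x, y, z: min(a(x, y, z), b(x, y, z))
      def intersect(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
      def subtract(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

      # A unit sphere with a smaller sphere carved out of its right side;
      # a negative value means the query point lies inside the solid.
      solid = subtract(sphere(0, 0, 0, 1.0), sphere(0.5, 0, 0, 0.7))
      print(solid(-0.9, 0, 0) < 0)   # True: this point survives the subtraction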

  15. myBrain: a novel EEG embedded system for epilepsy monitoring.

    PubMed

    Pinho, Francisco; Cerqueira, João; Correia, José; Sousa, Nuno; Dias, Nuno

    2017-10-01

    The World Health Organisation has pointed out that successful health care delivery requires effective medical devices as tools for prevention, diagnosis, treatment and rehabilitation. Several studies have concluded that longer monitoring periods and outpatient settings can increase diagnostic accuracy and the success rate of treatment selection. The long-term monitoring of epileptic patients through electroencephalography (EEG) has been considered a powerful tool to improve the diagnosis, disease classification, and treatment of patients with this condition. This work presents the development of a wireless and wearable EEG acquisition platform suitable for both long-term and short-term monitoring in inpatient and outpatient settings. The developed platform features 32 passive dry electrodes, analogue-to-digital signal conversion with 24-bit resolution and a variable sampling frequency from 250 Hz to 1000 Hz per channel, embedded in a stand-alone module. A computer-on-module embedded system runs a Linux® operating system that manages the interface between two software frameworks, which interact to satisfy the real-time constraints of signal acquisition as well as parallel recording, processing and wireless data transmission. A textile structure was developed to accommodate all components. Platform performance was evaluated in terms of hardware, software and signal quality. The electrodes were characterised through electrochemical impedance spectroscopy, and the performance of the operating system running an epileptic discrimination algorithm was evaluated. Signal quality was thoroughly assessed in two different approaches: playback of EEG reference signals and benchmarking with a clinical-grade EEG system in alpha-wave replacement and steady-state visual evoked potential paradigms. The proposed platform seems to efficiently monitor epileptic patients in both inpatient and outpatient settings and paves the way to new ambulatory clinical regimens as well as non-clinical EEG applications.

  16. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of the program: DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required: <25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur. Phys. J. C 23 (2002) 311]. This program implements some of the more significant ones, enabling the simulation of central particle production through color singlet exchange between interacting protons or antiprotons. Method of solution: The Monte Carlo method is used to simulate all elementary 2→2 and 2→1 processes available in HERWIG. The color singlet exchanges implemented in DPEMC are implemented as functions reweighting the photon flux already present in HERWIG. Restriction on the complexity of the problem: Since the program relies extensively on HERWIG, the limitations are the same as in [G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour, L. Stanco, Comput. Phys. Comm. 67 (1992) 465; G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. Seymour, B. Webber, JHEP 0101 (2001) 010]. Typical running time: Approximate times on an 800 MHz Pentium III: 5-20 min per 10 000 unweighted events, depending on the process under consideration.

  17. Calculating the renormalisation group equations of a SUSY model with Susyno

    NASA Astrophysics Data System (ADS)

    Fonseca, Renato M.

    2012-10-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features: Susyno contains functions that (a) calculate the Lagrangian of supersymmetric models and (b) calculate some group theoretical quantities. Some of these functions are available to the user and can be freely used. A built-in help system provides detailed information. Running time: Tests were made using a computer with an Intel Core i5 760 CPU, running under Ubuntu 11.04 and with Mathematica 8.0.1 installed. Using the option to suppress printing, the one- and two-loop beta functions of the MSSM were obtained in 2.5 s (NMSSM: 5.4 s). Note that the running time scales up very quickly with the total number of fields in the model. References: [1] S.P. Martin and M.T. Vaughn, Phys. Rev. D 50 (1994) 2282. [Erratum-ibid D 78 (2008) 039903] [arXiv:hep-ph/9311340]. [2] Y. Yamada, Phys. Rev. D 50 (1994) 3537 [arXiv:hep-ph/9401241].

  18. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.

    2013-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low- and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
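
    One common ingredient of such goodness-of-fit measurements is the mean natural-log residual ("model bias") between observed and simulated amplitudes across stations, with values near zero indicating an unbiased simulation. A generic Python sketch with made-up numbers (not the Platform's actual GoF module):

        # Goodness-of-fit sketch: mean log residual (bias) and its spread
        # between observed and simulated amplitudes, one value per station.
        import math

        def gof_bias(observed, simulated):
            res = [math.log(o / s) for o, s in zip(observed, simulated)]
            bias = sum(res) / len(res)
            sigma = math.sqrt(sum((r - bias) ** 2 for r in res) / len(res))
            return bias, sigma

        obs = [0.31, 0.12, 0.58, 0.22]   # e.g. peak ground acceleration (g)
        sim = [0.27, 0.15, 0.49, 0.25]
        bias, sigma = gof_bias(obs, sim)
        print(f"bias = {bias:+.3f}, sigma = {sigma:.3f}")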

  19. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
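
    From the user's side, work enters SLURM as a batch script whose #SBATCH directives request resources from the partition and job managers. A minimal Python sketch that writes and submits such a script (the resource values are arbitrary examples; sbatch must be on the PATH):

        # Write a minimal Slurm batch script and submit it with sbatch.
        import os
        import subprocess
        import tempfile

        JOB = "\n".join([
            "#!/bin/bash",
            "#SBATCH --job-name=demo",
            "#SBATCH --ntasks=4",
            "#SBATCH --time=00:10:00",
            "#SBATCH --output=demo_%j.out",
            "srun hostname",
            "",
        ])

        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(JOB)
            path = f.name

        # sbatch prints "Submitted batch job <id>" on success
        result = subprocess.run(["sbatch", path], capture_output=True, text=True)
        print(result.stdout.strip() or result.stderr.strip())
        os.unlink(path)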

  20. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  1. 77 FR 5864 - BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... current and accurate information concerning the securities of BluePoint Linux Software Corp. because it...

  2. Biogem: an effective tool-based approach for scaling up open source software development in bioinformatics.

    PubMed

    Bonnal, Raoul J P; Aerts, Jan; Githinji, George; Goto, Naohisa; MacLean, Dan; Miller, Chase A; Mishima, Hiroyuki; Pagani, Massimiliano; Ramirez-Gonzalez, Ricardo; Smant, Geert; Strozzi, Francesco; Syme, Rob; Vos, Rutger; Wennblom, Trevor J; Woodcroft, Ben J; Katayama, Toshiaki; Prins, Pjotr

    2012-04-01

    Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software generator, tools and tight web integration, is an improved general model for scaling up collaborative open source software development in bioinformatics. Biogem and its modules are free and open source software (OSS). Biogem runs on all systems that support recent versions of Ruby, including Linux, Mac OS X and Windows. Further information at http://www.biogems.info. A tutorial is available at http://www.biogems.info/howto.html. Contact: bonnal@ingm.org.

  3. Electronics for a highly segmented electromagnetic calorimeter prototype

    NASA Astrophysics Data System (ADS)

    Fehlker, D.; Alme, J.; van den Brink, A.; de Haas, A. P.; Nooren, G.-J.; Reicher, M.; Röhrich, D.; Rossewij, M.; Ullaland, K.; Yang, S.

    2013-03-01

    A prototype of a highly segmented electromagnetic calorimeter has been developed. The detector tower is made of 24 layers of PHASE2/MIMOSA23 silicon sensors sandwiched between tungsten plates, with 4 sensors per layer, a total of 96 MIMOSA sensors, resulting in 39 MPixels for the complete prototype detector tower. The paper focuses on the electronics of this calorimeter prototype. Two detector readout and control systems are used, each containing two Spartan 6 and one Virtex 6 FPGA, running embedded Linux, each system serving 12 detector layers. In 550 ms a total of 4 Gbytes of data is read from the detector, stored in memory on the electronics and then shipped to the DAQ system via Gigabit ethernet.

  4. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    PubMed

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony, and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open source project written in C++. DupTree for Mac OS X, Windows, and Linux along with a sample dataset and an on-line manual are available at http://genome.cs.iastate.edu/CBL/DupTree
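
    The reconciliation cost that gene tree parsimony minimises can be computed by LCA mapping: each gene-tree node maps to the lowest common ancestor of its children's images in the species tree, and a duplication is counted whenever a node maps to the same species-tree node as one of its children. A toy Python sketch (nested-tuple trees of our own devising, not DupTree's input format):

        # Core of gene tree parsimony: LCA-map each gene-tree node into the
        # species tree; a duplication is scored when a node maps to the same
        # species-tree node as one of its children. Trees are nested 2-tuples
        # with species names at the leaves.

        def parent_map(tree, par=None, out=None):
            out = {} if out is None else out
            out[tree] = par
            if not isinstance(tree, str):
                for child in tree:
                    parent_map(child, tree, out)
            return out

        def lca(a, b, par):
            seen, n = set(), a
            while n is not None:
                seen.add(n)
                n = par[n]
            n = b
            while n not in seen:
                n = par[n]
            return n

        def map_and_count(gene, par):
            if isinstance(gene, str):              # leaf: its own species
                return gene, 0
            lm, ld = map_and_count(gene[0], par)
            rm, rd = map_and_count(gene[1], par)
            m = lca(lm, rm, par)
            return m, ld + rd + (1 if m in (lm, rm) else 0)

        species = (("A", "B"), "C")
        gene = (("A", "B"), ("A", "C"))            # contains one duplication
        print(map_and_count(gene, parent_map(species))[1])   # -> 1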

  5. Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development

    NASA Technical Reports Server (NTRS)

    Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu

    2008-01-01

    The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is the non-real-time flight avionics simulator that is designed to be completely software-based and run on a workstation class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. At the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.

  6. VisIVO: A Tool for the Virtual Observatory and Grid Environment

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Costa, A.; Larsson, B.; Gheller, C.; Pasian, F.; Smareglia, R.

    2007-10-01

    We present the new features of VisIVO, software for the visualization and analysis of astrophysical data, which can handle data retrieved from the Virtual Observatory framework as well as from cosmological simulations, and which runs on both Windows and GNU/Linux platforms. VisIVO is VO standards compliant and supports the most important astronomical data formats such as FITS, HDF5 and VOTables. It is free software and can be downloaded from the web site http://visivo.cineca.it. VisIVO can interoperate with other astronomical VO compliant tools through PLASTIC (PLatform for AStronomical Tool InterConnection). This feature allows VisIVO to share data with many other astronomical packages to further analyze the loaded data.

  7. GRIL: genome rearrangement and inversion locator.

    PubMed

    Darling, Aaron E; Mau, Bob; Blattner, Frederick R; Perna, Nicole T

    2004-01-01

    GRIL is a tool to automatically identify collinear regions in a set of bacterial-size genome sequences. GRIL uses three basic steps. First, regions of high sequence identity are located. Second, some of these regions are filtered based on user-specified criteria. Finally, the remaining regions of sequence identity are used to define significant collinear regions among the sequences. By locating collinear regions of sequence, GRIL provides a basis for multiple genome alignment using current alignment systems. GRIL also provides a basis for using current inversion distance tools to infer phylogeny. GRIL is implemented in C++ and runs on any x86-based Linux or Windows platform. It is available from http://asap.ahabs.wisc.edu/gril
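
    The final step, defining significant collinear regions, can be illustrated with a standard chaining idea: among anchor matches given as coordinate pairs in the two genomes, keep a largest subset whose order is preserved in both, i.e., a longest increasing subsequence. A generic Python sketch (GRIL's actual filtering criteria are more elaborate):

        # Collinearity sketch: anchors are (pos_in_genome1, pos_in_genome2)
        # pairs; the largest order-preserving subset is a longest increasing
        # subsequence in the second coordinate after sorting by the first.
        def collinear_chain(matches):
            matches = sorted(matches)
            n = len(matches)
            best = [1] * n                 # longest chain ending at i
            prev = [-1] * n
            for i in range(n):
                for j in range(i):
                    if matches[j][1] < matches[i][1] and best[j] + 1 > best[i]:
                        best[i], prev[i] = best[j] + 1, j
            i = max(range(n), key=best.__getitem__)
            chain = []
            while i != -1:
                chain.append(matches[i])
                i = prev[i]
            return chain[::-1]

        anchors = [(10, 40), (20, 10), (30, 55), (50, 70), (60, 30)]
        print(collinear_chain(anchors))    # -> [(10, 40), (30, 55), (50, 70)]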

  8. gkmSVM: an R package for gapped-kmer SVM

    PubMed Central

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A.

    2016-01-01

    Summary: We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that reduces run time by about 2- to 5-fold relative to our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. Availability and Implementation: The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm Contact: mghandi@gmail.com or mbeer@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153639
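
    The feature space behind the gapped k-mer kernel can be made concrete: every length-l window of a sequence contributes one count for each of the C(l, k) ways of keeping k informative positions (the rest become wildcards), and the plain, un-normalised kernel value between two sequences is the inner product of their count vectors. A small Python sketch of this definition (not the package's optimised kernel algorithm):

        # Gapped k-mer feature counts: each length-l window contributes one
        # count for every choice of k informative positions (others become
        # the wildcard '-'); the plain kernel is a dot product of counts.
        from collections import Counter
        from itertools import combinations

        def gapped_kmer_features(seq, l=4, k=3):
            feats = Counter()
            for i in range(len(seq) - l + 1):
                w = seq[i:i + l]
                for keep in combinations(range(l), k):
                    feats["".join(w[j] if j in keep else "-" for j in range(l))] += 1
            return feats

        f1 = gapped_kmer_features("ACGTACGT")
        f2 = gapped_kmer_features("ACGTTCGA")
        print(sum(f1[w] * f2[w] for w in f1))   # un-normalised kernel value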

  9. FreeTure: A Free software to capTure meteors for FRIPON

    NASA Astrophysics Data System (ADS)

    Audureau, Yoan; Marmo, Chiara; Bouley, Sylvain; Kwon, Min-Kyung; Colas, François; Vaubaillon, Jérémie; Birlan, Mirel; Zanda, Brigitte; Vernazza, Pierre; Caminade, Stephane; Gattecceca, Jérôme

    2014-02-01

    The Fireball Recovery and Interplanetary Observation Network (FRIPON) is a French project started in 2014 which will monitor the sky, using 100 all-sky cameras to detect meteors and to retrieve related meteorites on the ground. Several meteor detection software packages already exist; some are proprietary, and some are hardware dependent. We present here the open source software for meteor detection to be installed on the FRIPON network's stations. The software will run on Linux with gigabit Ethernet cameras, and we plan to make it cross platform. This paper focuses on the meteor detection method used in the pipeline and on its present capabilities.

  10. GePEToS: A Geant4 Monte Carlo Simulation Package for Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Jan, S.; Collot, J.; Gallin-Martel, M.-L.; Martin, P.; Mayet, F.; Tournefier, E.

    2005-02-01

    GePEToS is a simulation framework developed over the last few years for assessing the instrumental performance of future positron emission tomography (PET) scanners. It is based on Geant4, written in object-oriented C++ and runs on Linux platforms. The validity of GePEToS has been tested on the well-known Siemens ECAT EXACT HR+ camera. The results of two application examples are presented: the design optimization of a liquid Xe μPET camera dedicated to small animal imaging as well as the evaluation of the effect of a strong axial magnetic field on the image resolution of a Concorde P4 μPET camera.

  11. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work. Program summary Program title: ms2 Catalogue identifier: AEJF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes, using the Message Passing Interface (MPI) protocol. Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: None. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing 2000 molecules or fewer. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operating platforms are provided.
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
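
    The Green-Kubo route to transport properties mentioned above reduces, for self-diffusion, to D = (1/3) times the time integral of the velocity autocorrelation function. A numpy sketch with synthetic, exponentially decorrelating velocities standing in for real MD output (arbitrary units, not ms2's Fortran implementation):

        # Green-Kubo self-diffusion: D = (1/3) * integral of the velocity
        # autocorrelation function <v(0).v(t)> dt. Synthetic AR(1) velocities
        # stand in for real MD output; units are arbitrary.
        import numpy as np

        def vacf(v):
            # v: (n_steps, n_particles, 3); average over particles and origins
            n = v.shape[0]
            return np.array([np.mean(np.sum(v[:n - s] * v[s:], axis=-1))
                             for s in range(n // 2)])

        def green_kubo_D(v, dt):
            return vacf(v).sum() * dt / 3.0    # rectangle-rule time integral

        rng = np.random.default_rng(0)
        v = rng.normal(size=(2000, 10, 3))
        for t in range(1, len(v)):             # correlate in time (AR(1))
            v[t] = 0.9 * v[t - 1] + 0.1 * v[t]
        print(green_kubo_D(v, dt=0.002))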

  12. HELAC-PHEGAS: A generator for all parton level processes

    NASA Astrophysics Data System (ADS)

    Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata

    2009-10-01

    The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders. New version program summary Program title: HELAC-PHEGAS Catalogue identifier: ADMS_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 986 No. of bytes in distributed program, including test data, etc.: 380 214 Distribution format: tar.gz Programming language: Fortran Computer: All Operating system: Linux Classification: 11.1, 11.2 External routines: Optionally Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/) Catalogue identifier of previous version: ADMS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306 Does the new version supersede the previous version?: Yes, partly Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!. Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in order to overcome the computational obstacles. The calculation of the amplitude, using Dyson-Schwinger recursive equations, results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained, allowing one to express an n-particle amplitude in terms of subamplitudes with 1, 2, … up to (n-1) particles. The color connection representation is used in order to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format, for arbitrary multiparticle processes in the Standard Model in leptonic, pp̄ and pp collisions. Reasons for new version: Substantial improvements, major functionality upgrade. Summary of revisions: Color connection representation, efficient integration over PDF via the PARNI algorithm, interface to LHAPDF, parton level events generated in the most recent LHA format, k_T reweighting for Parton Shower matching, numerical predictions for amplitudes for arbitrary processes for phase-space points provided by the user, new user interface and the possibility to run over computer clusters. Running time: Depending on the process studied. Usually from seconds to hours. References: A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306. C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247. URL: http://www.cern.ch/helac-phegas.
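
    The structure of such a recursion can be conveyed with a toy example: in a massless scalar theory with a cubic coupling, the off-shell current J(S) of a leg set S sums g * J(S1) * J(S2) over all bipartitions of S, times the propagator of the off-shell line, and memoisation keeps the cost exponential rather than factorial. A purely illustrative Python sketch (toy momenta and coupling, factors of i omitted; not HELAC's Standard Model implementation):

        # Toy Dyson-Schwinger/Berends-Giele recursion for a massless scalar
        # theory with cubic coupling g: the off-shell current J(S) sums
        # g*J(S1)*J(S2) over bipartitions of S, times the off-shell
        # propagator 1/p(S)^2.
        from functools import lru_cache
        from itertools import combinations

        g = 1.0
        momenta = {1: (3, 0, 0, 3), 2: (3, 0, 0, -3), 3: (-2, 1, 1, 0)}

        def p2(legs):
            p = [sum(momenta[i][mu] for i in legs) for mu in range(4)]
            return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

        @lru_cache(maxsize=None)
        def J(legs):                       # legs: sorted tuple of indices
            if len(legs) == 1:
                return 1.0                 # on-shell external leg
            total, rest = 0.0, set(legs)
            for size in range(1, len(legs) // 2 + 1):
                for s1 in combinations(legs, size):
                    s2 = tuple(sorted(rest - set(s1)))
                    if 2 * size == len(legs) and s1 > s2:
                        continue           # count unordered splits once
                    total += g * J(s1) * J(s2)
            return total / p2(legs)        # attach off-shell propagator

        print(J((1, 2, 3)))                # off-shell current of legs 1-3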

  13. A Disk-Based System for Producing and Distributing Science Products from MODIS

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael

    2007-01-01

    Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.

  14. Setting Up Git Software Tool on Linux | High-Performance Computing | NREL

    Science.gov Websites

    Before you can get started using the github.nrel.gov git repos, you'll have to do some basic configuration on your system, including having secure shell (SSH) keys created on the systems you will connect from. For more information, see Steps - Using a Remote Git Repository, after which you will have all the basic configuration needed for using git with a remote repository.

  15. Research on numerical control system based on S3C2410 and MCX314AL

    NASA Astrophysics Data System (ADS)

    Ren, Qiang; Jiang, Tingbiao

    2008-10-01

    With the rapid development of micro-computer technology, embedded systems, CNC technology and integrated circuits, a numerical control system with powerful functions can be realized with a few high-speed CPU and RISC (Reduced Instruction Set Computing) chips, which have small size and strong stability. In addition, real-time operating systems also make embedded implementations possible. Developing an NC system based on embedded technology can overcome some shortcomings of common PC-based CNC systems, such as wasted resources, low control precision, low frequency and low integration. This paper discusses a hardware platform for an ENC (Embedded Numerical Control) system based on the embedded processor chip ARM (Advanced RISC Machines) S3C2410 and the DSP (Digital Signal Processor) MCX314AL, and introduces the process of developing the ENC system software. Finally, the MCX314AL driver is written under the embedded Linux operating system. Embedded Linux handles multitasking well and satisfies the real-time and reliability requirements of motion control. With embedded technology, the NC system makes efficient use of resources in a compact design, providing a wealth of functions and superior performance at lower cost. ENC is therefore a promising direction for future CNC development.

  16. Rapid analysis of protein backbone resonance assignments using cryogenic probes, a distributed Linux-based computing architecture, and an integrated set of spectral analysis tools.

    PubMed

    Monleón, Daniel; Colson, Kimberly; Moseley, Hunter N B; Anklin, Clemens; Oswald, Robert; Szyperski, Thomas; Montelione, Gaetano T

    2002-01-01

    Rapid data collection, spectral referencing, processing by time domain deconvolution, peak picking and editing, and assignment of NMR spectra are necessary components of any efficient integrated system for protein NMR structure analysis. We have developed a set of software tools designated AutoProc, AutoPeak, and AutoAssign, which function together with the data processing and peak-picking programs NMRPipe and Sparky, to provide an integrated software system for rapid analysis of protein backbone resonance assignments. In this paper we demonstrate that these tools, together with high-sensitivity triple resonance NMR cryoprobes for data collection and a Linux-based computer cluster architecture, can be combined to provide nearly complete backbone resonance assignments and secondary structures (based on chemical shift data) for a 59-residue protein in less than 30 hours of data collection and processing time. In this optimum case of a small protein providing excellent spectra, extensive backbone resonance assignments could also be obtained using less than 6 hours of data collection and processing time. These results demonstrate the feasibility of high throughput triple resonance NMR for determining resonance assignments and secondary structures of small proteins, and the potential for applying NMR in large scale structural proteomics projects.

  17. TomoEED: Fast Edge-Enhancing Denoising of Tomographic Volumes.

    PubMed

    Moreno, J J; Martínez-Sánchez, A; Martínez, J A; Garzón, E M; Fernández, J J

    2018-05-29

    TomoEED is an optimized software tool for fast feature-preserving noise filtering of large 3D tomographic volumes on CPUs and GPUs. The tool is based on the anisotropic nonlinear diffusion method. It has been developed with special emphasis on reducing the computational demands, using different strategies from the algorithmic to the high-performance computing perspectives. TomoEED manages to filter large volumes in a matter of minutes on standard computers. TomoEED has been developed in C. It is available for Linux platforms at http://www.cnb.csic.es/%7ejjfernandez/tomoeed. Contact: gmartin@ual.es, JJ.Fernandez@csic.es. Supplementary data are available at Bioinformatics online.
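
    The principle of edge-preserving diffusion can be shown with a simplified, scalar-diffusivity (Perona-Malik-type) step: the diffusivity 1/(1 + |grad u|^2/K^2) is small near strong gradients, so edges are smoothed much less than noise. A numpy sketch of this principle (TomoEED itself uses a more elaborate anisotropic, tensor-valued scheme in optimised C):

        # Simplified nonlinear diffusion on a 3D volume: diffusivity
        # 1/(1 + |grad u|^2 / K^2) is small near edges, so noise smooths
        # while boundaries persist.
        import numpy as np

        def diffuse(u, n_iter=10, dt=0.1, K=0.05):
            for _ in range(n_iter):
                grads = np.gradient(u)                    # per-axis derivatives
                gmag2 = sum(gr ** 2 for gr in grads)
                g = 1.0 / (1.0 + gmag2 / K ** 2)          # edge-stopping factor
                div = sum(np.gradient(g * gr, axis=ax)    # div(g * grad u)
                          for ax, gr in enumerate(grads))
                u = u + dt * div
            return u

        vol = np.random.rand(32, 32, 32).astype(np.float32)
        print(diffuse(vol).shape)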

  18. Controlador para un Reloj GPS de Referencia en el Protocolo NTP

    NASA Astrophysics Data System (ADS)

    Hauscarriaga, F.; Bareilles, F. A.

    The synchronization between computers in a local network plays a very important role in environments like the IAR. Calculations of exact time are needed before, during and after an observation. For this purpose the IAR's GNU/Linux Software Development Team implemented a driver within the NTP protocol (an Internet standard for time synchronization of computers) for a GPS receiver acquired a few years ago by the IAR, which previously had no support in that protocol. Today our Institute has a stable and reliable time base, synchronized to the atomic clocks on board GPS satellites according to the computer synchronization standard, offering precise time services to the whole scientific community and particularly to the University of La Plata. FULL TEXT IN SPANISH

  19. Three-dimensional interactive Molecular Dynamics program for the study of defect dynamics in crystals

    NASA Astrophysics Data System (ADS)

    Patriarca, M.; Kuronen, A.; Robles, M.; Kaski, K.

    2007-01-01

    The study of crystal defects and the complex processes underlying their formation and time evolution has motivated the development of the program ALINE for interactive molecular dynamics experiments. This program couples a molecular dynamics code to a Graphical User Interface and runs on a UNIX-X11 Window System platform with the MOTIF library, which is contained in many standard Linux releases. ALINE is written in C, thus giving the user the possibility of modifying the source code, and, at the same time, provides an effective and user-friendly framework for numerical experiments, in which the main parameters can be interactively varied and the system visualized in various ways. We illustrate the main features of the program through some examples of detection and dynamical tracking of point defects, linear defects, and planar defects, such as stacking faults in lattice-mismatched heterostructures. Program summary Title of program: ALINE Catalogue identifier: ADYJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYJ_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: Computers: DEC ALPHA 300, Intel i386 compatible computers, G4 Apple Computers Installations: Laboratory of Computational Engineering, Helsinki University of Technology, Helsinki, Finland Operating systems under which the program has been tested: Tru64 UNIX, Linux-i386, Mac OS X 10.3 and 10.4 Programming language used: Standard C and MOTIF libraries Memory required to execute with typical data: 6 Mbytes, but may be larger depending on the system size No. of lines in distributed program, including test data, etc.: 16 901 No. of bytes in distributed program, including test data, etc.: 449 559 Distribution format: tar.gz Nature of physical problem: Some phenomena involving defects take place inside three-dimensional crystals at times which can hardly be predicted. For this reason they are difficult to detect and track even within numerical experiments, especially when one is interested in studying their dynamical properties and time evolution. Furthermore, traditional simulation methods require the storage of huge amounts of data, which in turn may imply lengthy analysis. Method of solution: Simplifications of the simulation work described above also depend strongly on computer performance. Some of these simplifications have now become practical thanks to interactive programs. The solution proposed here is based on the development of an interactive graphical simulation program, both to avoid large-scale data storage and the subsequent elaboration and analysis, and to visualize and track many phenomena inside three-dimensional samples. However, the full computational power of traditional simulation programs may not be available in programs with graphical user interfaces, due to their interactive nature. Nevertheless, interactive programs can still be very useful for detecting processes that are difficult to visualize, restricting the range of or fine-tuning the parameters, and tailoring the faster programs toward precise targets. Restrictions on the complexity of the problem: The restrictions on the applicability of the program are related to the computer resources available. The graphical interface and interactivity demand computational resources that depend on the particular numerical simulation to be performed.
To preserve a balance between speed and resources, the choice of the number of atoms to be simulated is critical. With an average current computer, simulations of systems with more than 10^5 atoms may not be easily feasible in an interactive scheme. Another restriction is related to the fact that the program was originally designed to simulate systems in the solid phase, so problems in the simulation may occur if some particular physical quantities are computed beyond the melting point. Typical running time: It depends on the machine architecture, system size, and user needs. Unusual features of the program: Besides the window in which the system is represented in real space, an additional graphical window presents the real-time distribution histogram of different physical variables (such as kinetic or potential energy). Such a tool is very useful for demonstrative numerical experiments for teaching purposes as well as for research, e.g., for detecting and tracking crystal defects. The program includes: an initial condition builder, an interactive display of the simulation, and a set of tools which allow the user to filter the information through different physical quantities (either displayed in real time or printed in the output files) and to perform an efficient search of the interesting regions of parameter space.
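
    The inner loop of any such MD code is the integration step. A minimal numpy sketch of one velocity-Verlet step under pairwise Lennard-Jones forces (open boundaries, no neighbour lists, reduced units; not ALINE's actual C implementation):

        # One velocity-Verlet step under pairwise Lennard-Jones forces.
        import numpy as np

        def lj_forces(pos, eps=1.0, sigma=1.0):
            f = np.zeros_like(pos)
            for i in range(len(pos)):
                for j in range(i + 1, len(pos)):
                    r = pos[i] - pos[j]
                    r2 = float(np.dot(r, r))
                    s6 = (sigma * sigma / r2) ** 3
                    fij = 24.0 * eps * (2.0 * s6 * s6 - s6) / r2 * r
                    f[i] += fij
                    f[j] -= fij
            return f

        def verlet_step(pos, vel, dt=1e-3, mass=1.0):
            vel = vel + 0.5 * dt * lj_forces(pos) / mass   # half kick
            pos = pos + dt * vel                           # drift
            vel = vel + 0.5 * dt * lj_forces(pos) / mass   # half kick
            return pos, vel

        pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
        vel = np.zeros_like(pos)
        pos, vel = verlet_step(pos, vel)
        print(pos)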

  20. MC-TESTER: a universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    NASA Astrophysics Data System (ADS)

    Golonka, P.; Pierzchała, T.; Waş, Z.

    2004-02-01

    Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER's application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as the exclusive source of information on the generated events. Program summary Title of the program: MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512 MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used: C++, FORTRAN 77; gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. of bytes in distributed program, including test data, etc.: 2 024 425 Distribution format: tar gzip file Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-page booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, for checking the correctness of installations, or for discussing uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle.
On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. Generation step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
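
    The comparison machinery can be sketched compactly: form the invariant mass of every subset of two or more decay products, histogram these masses for each generator run, and summarise each pair of histograms with a shape-difference number. A toy Python sketch (made-up four-vectors; the difference measure below, half the summed absolute difference of normalised bin contents, is a simple stand-in for MC-TESTER's actual measures):

        # Invariant masses of all subsets of >= 2 decay products, plus a
        # shape-difference number for a pair of histograms
        # (0 = identical shapes, 1 = disjoint). Four-vectors are (E,px,py,pz).
        from itertools import combinations
        import math

        def inv_mass(parts):
            e, px, py, pz = (sum(p[i] for p in parts) for i in range(4))
            return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

        def all_masses(event):
            return [inv_mass(s) for k in range(2, len(event) + 1)
                    for s in combinations(event, k)]

        def shape_diff(h1, h2):
            n1, n2 = float(sum(h1)), float(sum(h2))
            return 0.5 * sum(abs(a / n1 - b / n2) for a, b in zip(h1, h2))

        ev = [(1.0, 0.3, 0.0, 0.9), (1.0, -0.3, 0.0, 0.9), (0.5, 0.0, 0.1, 0.4)]
        print([round(m, 3) for m in all_masses(ev)])
        print(shape_diff([4, 10, 6], [5, 9, 6]))    # toy histograms -> 0.05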
