Sample records for simple linux utility

  1. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  2. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-07-08

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.

  3. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  4. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-04-24

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.

  5. PMIX_Ring patch for SLURM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A. T.

    2014-04-20

    This code adds an implementation of PMIX_Ring to the existing PMI2 library in the SLURM open source software package (Simple Linux Utility for Resource Management). PMIX_Ring executes a particular communication pattern that is used to bootstrap connections between MPI processes in a parallel job.
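
    A minimal Python sketch of the ring pattern the abstract describes: each rank learns the endpoints of its left and right neighbours, which is enough to bootstrap pairwise connections. This is a conceptual illustration only, not the SLURM/PMI2 implementation; the endpoint strings are made up.

      # Hypothetical illustration of a PMIX_Ring-style exchange (not the SLURM code).
      def ring_exchange(addresses):
          """Return, for every rank, the addresses of its left and right neighbours."""
          n = len(addresses)
          neighbours = []
          for rank in range(n):
              left = addresses[(rank - 1) % n]   # neighbour to the "left" in the ring
              right = addresses[(rank + 1) % n]  # neighbour to the "right" in the ring
              neighbours.append((left, right))
          return neighbours

      if __name__ == "__main__":
          addrs = ["10.0.0.%d:5000" % i for i in range(4)]  # made-up endpoints
          for rank, (left, right) in enumerate(ring_exchange(addrs)):
              print("rank", rank, "left:", left, "right:", right)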

  6. Development of an Autonomous Navigation Technology Test Vehicle

    DTIC Science & Technology

    2004-08-01

    as an independent thread on processors using the Linux operating system. The computer hardware selected for the nodes that host the MRS threads...communications system design. Linux was chosen as the operating system for all of the single board computers used on the Mule. Linux was specifically...used for system analysis and development. The simple realization of multi-thread processing and inter-process communications in Linux made it a

  7. minimega

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Fritz, John Floren

    2013-08-27

    Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.

  8. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  9. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  10. Abstract of talk for Silicon Valley Linux Users Group

    NASA Technical Reports Server (NTRS)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; work in the Neuroengineering Lab with code IC, including an introduction to the extension of the human senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people with Neuroscan caps on, and progress that has been made and how Linux has helped.

  11. Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual

    DTIC Science & Technology

    2005-02-01

    on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The...SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA...middleware, Xerces for the XML parser, and Red Hat Linux for the Operating System. The software is referred to as Open Radio Communication

  12. Cross platform development using Delphi and Kylix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross platform component for EPICS Simple Channel Access (SCA) has been developed for the use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  13. Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
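
    A minimal Python sketch of driving the three functions from a user's point of view, assuming SLURM is installed: sbatch and squeue are standard SLURM commands, while the script path, options, and user name below are only examples.

      import subprocess

      def submit_job(script_path, nodes=1, time_limit="00:10:00"):
          """Submit a batch script with sbatch and return sbatch's output."""
          cmd = ["sbatch", "--nodes", str(nodes), "--time", time_limit, script_path]
          return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

      def show_queue(user):
          """List the user's pending and running jobs with squeue."""
          return subprocess.run(["squeue", "-u", user],
                                capture_output=True, text=True).stdout

      if __name__ == "__main__":
          print(submit_job("job.sh"))   # hypothetical batch script
          print(show_queue("alice"))    # hypothetical user name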

  14. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites, and user interfaces with simple on-screen displays are increasingly common. User demands are growing and the systems have ever more fields of application due to the high penetration rate of the Internet, so demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, a real-time broadcast of the video image on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and successive frames from the web camera are compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board uses the S3C2410 MPU with the ARM920T core from Samsung. The operating system is a ported embedded Linux kernel with a mounted root file system. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program built on the TCP/IP protocol.
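
    A toy Python sketch of the block-matching step mentioned in the abstract: it estimates a single displacement vector between two grayscale frames by minimising the sum of absolute differences over a small search window. The frame sizes, block size, and search range are illustrative assumptions, not values from the paper.

      import numpy as np

      def block_match(prev, curr, block=16, search=8):
          """Estimate (dx, dy) between two frames via sum-of-absolute-differences."""
          h, w = prev.shape
          cy, cx = h // 2 - block // 2, w // 2 - block // 2     # central reference block
          ref = prev[cy:cy + block, cx:cx + block].astype(np.int32)
          best, best_dx, best_dy = None, 0, 0
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = cy + dy, cx + dx
                  if y < 0 or x < 0 or y + block > h or x + block > w:
                      continue
                  cand = curr[y:y + block, x:x + block].astype(np.int32)
                  sad = np.abs(ref - cand).sum()
                  if best is None or sad < best:
                      best, best_dy, best_dx = sad, dy, dx
          return best_dx, best_dy   # displacement that would drive the pan/tilt motors

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          f0 = rng.integers(0, 255, (120, 160), dtype=np.uint8)
          f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))           # simulate camera motion
          print(block_match(f0, f1))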

  15. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  16. Real-time data collection in Linux: a case study.

    PubMed

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
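
    A small Python sketch of the kind of millisecond-level timing the paper validates on Linux, using the monotonic clock exposed by the standard library: coarse sleep followed by a short busy-wait. This is an analogue of the approach, not the author's C/MIDI program.

      import time

      def wait_until(t_target):
          """Sleep coarsely, then busy-wait, to wake close to t_target (CLOCK_MONOTONIC seconds)."""
          while True:
              remaining = t_target - time.clock_gettime(time.CLOCK_MONOTONIC)
              if remaining <= 0:
                  return
              if remaining > 0.002:
                  time.sleep(remaining - 0.002)   # coarse sleep, leave ~2 ms
              # the final ~2 ms are spent spinning for sub-millisecond precision

      if __name__ == "__main__":
          start = time.clock_gettime(time.CLOCK_MONOTONIC)
          for i in range(1, 6):                    # five ticks, 100 ms apart
              wait_until(start + 0.1 * i)
              now = time.clock_gettime(time.CLOCK_MONOTONIC)
              print("tick %d, error = %.3f ms" % (i, (now - start - 0.1 * i) * 1e3))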

  17. Impact of the Shodan Computer Search Engine on Internet-facing Industrial Control System Devices

    DTIC Science & Technology

    2014-03-27

    bridge implementation. The transparent bridge is designed using a Raspberry Pi configured with Linux IPtables and bridge-utils to bridge the on board...Ethernet card and a second USB Ethernet adapter. A Raspberry Pi is a credit-card-sized single-board computer running a version of Debian Linux. There

  18. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    PubMed

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  19. A PC parallel port button box provides millisecond response time accuracy under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
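
    A minimal sketch of reading the parallel-port status register from Linux user space through /dev/port, one way to poll a button wired to the port. It requires root, assumes the common 0x378 base address, and picks the ACK line arbitrarily as the "button" bit; none of this is the article's own program.

      import os

      PARPORT_BASE = 0x378          # typical base address of the first parallel port
      STATUS_REG = PARPORT_BASE + 1 # status register lives at base + 1

      def read_status():
          """Read one byte from the parallel-port status register via /dev/port (needs root)."""
          fd = os.open("/dev/port", os.O_RDONLY)
          try:
              os.lseek(fd, STATUS_REG, os.SEEK_SET)
              return os.read(fd, 1)[0]
          finally:
              os.close(fd)

      if __name__ == "__main__":
          value = read_status()
          # bit 6 (ACK) chosen arbitrarily as the "button" line for this sketch
          print("status byte: 0x%02x, button pressed: %s" % (value, bool(value & 0x40)))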

  20. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  1. A Framework for Automated Digital Forensic Reporting

    DTIC Science & Technology

    2009-03-01

    provide a simple way to extract local accounts from a full system image. Unix, Linux and the BSD variants store user accounts in the /etc/passwd file...with hashes of the user passwords in the /etc/shadow file for Linux or /etc/master.passwd for BSD. /etc/passwd also contains mappings from usernames to... passwd file may not map directly to real-world names, it can be a crucial link in this eventual mapping. Following are two examples where it could prove

  2. SplicePlot: a utility for visualizing splicing quantitative trait loci.

    PubMed

    Wu, Eric; Nance, Tracy; Montgomery, Stephen B

    2014-04-01

    RNA sequencing has provided unprecedented resolution of alternative splicing and splicing quantitative trait loci (sQTL). However, there are few tools available for visualizing the genotype-dependent effects of splicing at a population level. SplicePlot is a simple command line utility that produces intuitive visualization of sQTLs and their effects. SplicePlot takes mapped RNA sequencing reads in BAM format and genotype data in VCF format as input and outputs publication-quality Sashimi plots, hive plots and structure plots, enabling better investigation and understanding of the role of genetics on alternative splicing and transcript structure. Source code and detailed documentation are available at http://montgomerylab.stanford.edu/spliceplot/index.html under Resources and at Github. SplicePlot is implemented in Python and is supported on Linux and Mac OS. A VirtualBox virtual machine running Ubuntu with SplicePlot already installed is also available.

  3. Collaboration rules.

    PubMed

    Evans, Philip; Wolf, Bob

    2005-01-01

    Corporate leaders seeking to boost growth, learning, and innovation may find the answer in a surprising place: the Linux open-source software community. Linux is developed by an essentially volunteer, self-organizing community of thousands of programmers. Most leaders would sell their grandmothers for workforces that collaborate as efficiently, frictionlessly, and creatively as the self-styled Linux hackers. But Linux is software, and software is hardly a model for mainstream business. The authors have, nonetheless, found surprising parallels between the anarchistic, caffeinated, hirsute world of Linux hackers and the disciplined, tea-sipping, clean-cut world of Toyota engineering. Specifically, Toyota and Linux operate by rules that blend the self-organizing advantages of markets with the low transaction costs of hierarchies. In place of markets' cash and contracts and hierarchies' authority are rules about how individuals and groups work together (with rigorous discipline); how they communicate (widely and with granularity); and how leaders guide them toward a common goal (through example). Those rules, augmented by simple communication technologies and a lack of legal barriers to sharing information, create rich common knowledge, the ability to organize teams modularly, extraordinary motivation, and high levels of trust, which radically lowers transaction costs. Low transaction costs, in turn, make it profitable for organizations to perform more and smaller transactions--and so increase the pace and flexibility typical of high-performance organizations. Once the system achieves critical mass, it feeds on itself. The larger the system, the more broadly shared the knowledge, language, and work style. The greater individuals' reputational capital, the louder the applause and the stronger the motivation. The success of Linux is evidence of the power of that virtuous circle. Toyota's success is evidence that it is also powerful in conventional companies.

  4. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.

  5. Development of a portable Linux-based ECG measurement and monitoring system.

    PubMed

    Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng

    2011-08-01

    This work presents a portable Linux-based electrocardiogram (ECG) signals measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The proposed system can be installed anywhere (e.g., offices, homes, healthcare centers and ambulances) to allow people to self-monitor their health conditions at any time. The proposed system also enables remote diagnosis via Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling a single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified by using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free as Linux, an open-source code, is utilized during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.

  6. Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.

    PubMed

    Parker, Nicolas J; Parker, Andrew G

    2008-04-18

    The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we found the need for tools to search quickly a set of reads for near exact text matches. A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC that can be used by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.

  7. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events...are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix...achieved when the features are extracted just from simple commands. Method Hit Rate False Positive Rate ocSVM using simple cmds (freq.-based

  8. Setting Up the JBrowse Genome Browser

    PubMed Central

    Skinner, Mitchell E; Holmes, Ian H

    2010-01-01

    JBrowse is a web-based tool for visualizing genomic data. Unlike most other web-based genome browsers, JBrowse exploits the capabilities of the user's web browser to make scrolling and zooming fast and smooth. It supports the browsers used by almost all internet users, and is relatively simple to install. JBrowse can utilize multiple types of data in a variety of common genomic data formats, including genomic feature data in bioperl databases, GFF files, and BED files, and quantitative data in wiggle files. This unit describes how to obtain the JBrowse software, set it up on a Linux or Mac OS X computer running as a web server and incorporate genome annotation data from multiple sources into JBrowse. After completing the protocols described in this unit, the reader will have a web site that other users can visit to browse the genomic data. PMID:21154710

  9. Digital beacon receiver for ionospheric TEC measurement developed with GNU Radio

    NASA Astrophysics Data System (ADS)

    Yamamoto, M.

    2008-11-01

    A simple digital receiver named GNU Radio Beacon Receiver (GRBR) was developed for the satellite-ground beacon experiment to measure the ionospheric total electron content (TEC). The open-source software toolkit for the software defined radio, GNU Radio, is utilized to realize the basic function of the receiver and perform fast signal processing. The software is written in Python for a LINUX PC. The open-source hardware called Universal Software Radio Peripheral (USRP), which best matches the GNU Radio, is used as a front-end to acquire the satellite beacon signals of 150 and 400 MHz. The first experiment was successful as results from GRBR showed very good agreement to those from the co-located analog beacon receiver. Detailed design information and software codes are open at the URL http://www.rish.kyoto-u.ac.jp/digitalbeacon/.

  10. Soft control of scanning probe microscope with high flexibility.

    PubMed

    Liu, Zhenghui; Guo, Yuzheng; Zhang, Zhaohui; Zhu, Xing

    2007-01-01

    Most commercial scanning probe microscopes have multiple embedded digital microprocessors and utilize complex software for system control, which is not easily obtained or modified by researchers wishing to perform novel and special applications. In this paper, we present a simple and flexible control solution that just depends on software running on a single-processor personal computer with real-time Linux operating system to carry out all the control tasks including negative feedback, tip moving, data processing and user interface. In this way, we fully exploit the potential of a personal computer in calculating and programming, enabling us to manipulate the scanning probe as required without any special digital control circuits and related technical know-how. This solution has been successfully applied to a homemade ultrahigh vacuum scanning tunneling microscope and a multiprobe scanning tunneling microscope.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William Eugene

    These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - Installing on Windows with an installer executable, Installing with a Linux application utility, Installing a Python package from the PyPI repository, and Installing a Python package from source. The Bad: the user does not have administrative privileges - Using a virtual environment to isolate package installations, and Using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - Installing a Python extension package from source, and PyCoinInstall - Managing builds for Python extension packages. The last item referring to PyCoinInstall describes a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
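
    A minimal Python sketch of the "no administrative privileges" scenario from the talk: create an isolated virtual environment with the standard-library venv module and install one package from PyPI into it. The environment directory and package name are just examples.

      import subprocess
      import sys
      import venv

      def install_in_venv(env_dir, package):
          """Create a virtual environment and install one PyPI package into it."""
          venv.create(env_dir, with_pip=True)                       # standard-library venv
          pip = (f"{env_dir}/bin/pip" if sys.platform != "win32"
                 else f"{env_dir}\\Scripts\\pip.exe")
          subprocess.run([pip, "install", package], check=True)

      if __name__ == "__main__":
          install_in_venv("demo-env", "requests")                   # example package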

  12. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.

  13. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

    FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
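
    A rough single-node Python illustration of why parallel copies beat the single-process Linux tools: farm individual file copies out to a pool of worker processes. The real FileUtils suite is written in C and uses MPI; the paths and worker count here are placeholders.

      import multiprocessing
      import os
      import shutil

      def copy_one(paths):
          src, dst = paths
          os.makedirs(os.path.dirname(dst), exist_ok=True)
          shutil.copy2(src, dst)            # preserves timestamps/permissions

      def parallel_copy(src_root, dst_root, workers=8):
          """Copy a directory tree using a pool of worker processes."""
          jobs = []
          for dirpath, _, filenames in os.walk(src_root):
              for name in filenames:
                  src = os.path.join(dirpath, name)
                  dst = os.path.join(dst_root, os.path.relpath(src, src_root))
                  jobs.append((src, dst))
          with multiprocessing.Pool(workers) as pool:
              pool.map(copy_one, jobs)

      if __name__ == "__main__":
          parallel_copy("/tmp/dataset", "/tmp/dataset-copy")   # example paths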

  14. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
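
    A small Python sketch of the transport mechanism that makes this work: an HTTP Range request fetches only the byte span the browser view needs instead of the whole file. The URL and offsets below are placeholders, and real BigWig access would first consult the file's index to pick the range.

      import urllib.request

      def fetch_bytes(url, start, length):
          """Fetch `length` bytes starting at `start` using an HTTP Range request."""
          req = urllib.request.Request(
              url, headers={"Range": f"bytes={start}-{start + length - 1}"})
          with urllib.request.urlopen(req) as resp:
              return resp.read()

      if __name__ == "__main__":
          chunk = fetch_bytes("https://example.org/data.bw", 0, 64)   # example URL
          print(len(chunk), "bytes received")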

  15. Raspberry Pi: a 35-dollar device for viewing DICOM images.

    PubMed

    Paiva, Omir Antunes; Moreira, Renata de Oliveira

    2014-01-01

    Raspberry Pi is a low-cost computer created for educational purposes. It uses Linux and, most of the time, freeware applications, including software for viewing DICOM images. With an external monitor, the supported resolution (1920 × 1200 pixels) allows for the setup of simple viewing workstations at a reduced cost.

  16. Raspberry Pi: a 35-dollar device for viewing DICOM images*

    PubMed Central

    Paiva, Omir Antunes; Moreira, Renata de Oliveira

    2014-01-01

    Raspberry Pi is a low-cost computer created for educational purposes. It uses Linux and, most of the time, freeware applications, including software for viewing DICOM images. With an external monitor, the supported resolution (1920 × 1200 pixels) allows for the setup of simple viewing workstations at a reduced cost. PMID:25741057

  17. The Effect of NUMA Tunings on CPU Performance

    NASA Astrophysics Data System (ADS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.
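
    A minimal Python sketch of the manual tuning the abstract mentions, assuming the numactl utility is installed: pin a command's CPUs and memory allocations to one NUMA node. The benchmark command below is only an example, not the HEPSPEC06 or ATLAS workloads used in the paper.

      import subprocess

      def run_on_node(node, command):
          """Run `command` with CPUs and memory bound to a single NUMA node."""
          cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}"] + command
          return subprocess.run(cmd, check=True)

      if __name__ == "__main__":
          subprocess.run(["numactl", "--hardware"])      # show the machine's NUMA topology
          run_on_node(0, ["python3", "-c", "print(sum(range(10**7)))"])   # toy workload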

  18. PyEPL: a cross-platform experiment-programming library.

    PubMed

    Geller, Aaron S; Schlefer, Ian K; Sederberg, Per B; Jacobs, Joshua; Kahana, Michael J

    2007-11-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL.

  19. PyEPL: A cross-platform experiment-programming library

    PubMed Central

    Geller, Aaron S.; Schleifer, Ian K.; Sederberg, Per B.; Jacobs, Joshua; Kahana, Michael J.

    2009-01-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL. PMID:18183912

  20. A Business Case Study of Open Source Software

    DTIC Science & Technology

    2001-07-01

    LinuxPPC: LinuxPPC (www.linuxppc.com); MandrakeSoft: Linux-Mandrake (www.linux-mandrake.com/en/); CLE Project: CLE (cle.linux.org.tw/CLE/e_index.shtml); Red Hat... en; Coyote Linux (www2.vortech.net/coyte/coyte.htm); MNIS (www.mnis.fr); Data-Portal (www.data-portal.com); Mr O's Linux Emporium (www.ouin.com); DLX Linux (www.wu... [Figure 11. Worldwide New Linux Shipments (Client and Server), 1998-1999, shipments in millions. Source: IDC, 2000.] 3.2.2 Market

  1. Science Opportunity Analyzer (SOA): Science Planning Made Simple

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Polanskey, Carol A.

    2004-01-01

    For the first time at JPL, the Cassini mission to Saturn is using distributed science operations for developing their experiments. Remote scientists needed the ability to: a) Identify observation opportunities; b) Create accurate, detailed designs for their observations; c) Verify that their designs meet their objectives; d) Check their observations against project flight rules and constraints; e) Communicate their observations to other scientists. Many existing tools provide one or more of these functions, but Science Opportunity Analyzer (SOA) has been built to unify these tasks into a single application. Accurate: Utilizes the JPL Navigation and Ancillary Information Facility (NAIF) SPICE software toolkit - Provides high fidelity modeling. - Facilitates rapid adaptation to other flight projects. Portable: Available in Unix, Windows and Linux. Adaptable: Designed to be a multi-mission tool so it can be readily adapted to other flight projects. Implemented in Java, Java 3D and other innovative technologies. Conclusion: SOA is easy to use. It only requires 6 simple steps. SOA's ability to show the same accurate information in multiple ways (multiple visualization formats, data plots, listings and file output) is essential to meet the needs of a diverse, distributed science operations environment.

  2. A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium

    ERIC Educational Resources Information Center

    Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.

    2016-01-01

    In this paper, we present a simple spreadsheet based simulation activity that can be performed by students at the undergraduate level. This simulation is implemented in free open source software (FOSS) LibreOffice Calc, which is available for both Windows and Linux platform. This activity aims at building the probability distribution for the…

  3. LINUX, Virtualization, and the Cloud: A Hands-On Student Introductory Lab

    ERIC Educational Resources Information Center

    Serapiglia, Anthony

    2013-01-01

    Many students are entering Computer Science education with limited exposure to operating systems and applications other than those produced by Apple or Microsoft. This gap in familiarity with the Open Source community can quickly be bridged with a simple exercise that can also be used to strengthen two other important current computing concepts,…

  4. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
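
    A minimal Python sketch of asking the Linux kernel to offline memory through sysfs, the hot-remove interface that page offlining builds on. It needs root, the kernel must support memory hotplug, and the block number below is purely illustrative; this is not the LANL tooling described in the abstract.

      def offline_memory_block(block_id):
          """Ask the kernel to offline one memory block via sysfs (requires root)."""
          path = f"/sys/devices/system/memory/memory{block_id}/state"
          with open(path, "w") as f:
              f.write("offline")

      def block_state(block_id):
          """Return the current state ('online' or 'offline') of a memory block."""
          with open(f"/sys/devices/system/memory/memory{block_id}/state") as f:
              return f.read().strip()

      if __name__ == "__main__":
          blk = 32                      # example block; pick one the kernel can move
          print("before:", block_state(blk))
          offline_memory_block(blk)
          print("after:", block_state(blk))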

  5. Estimating aquifer transmissivity from specific capacity using MATLAB.

    PubMed

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
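
    A short Python sketch of the basic relation behind such estimates: fixed-point iteration of the Cooper-Jacob approximation to back transmissivity out of a specific-capacity test. The paper's MATLAB program additionally corrects for partial penetration and well efficiency, which this sketch omits; consistent units are assumed and the input values are made up.

      import math

      def transmissivity_from_specific_capacity(Q, s, t, r, S, T0=100.0, tol=1e-6):
          """Iterate s = Q/(4*pi*T) * ln(2.25*T*t/(r**2*S)) to solve for T."""
          T = T0
          for _ in range(200):
              T_new = Q / (4.0 * math.pi * s) * math.log(2.25 * T * t / (r ** 2 * S))
              if abs(T_new - T) < tol:
                  return T_new
              T = T_new
          return T

      if __name__ == "__main__":
          # Q = 500 m3/day, s = 2 m drawdown, t = 1 day, r = 0.1 m, S = 1e-4 (illustrative)
          print(transmissivity_from_specific_capacity(500.0, 2.0, 1.0, 0.1, 1e-4))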

  6. Sptrace

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Sptrace is a general-purpose space utilization tracing system that is conceptually similar to the commercial Purify product used to detect leaks and other memory usage errors. It is designed to monitor space utilization in any sort of heap, i.e., a region of data storage on some device (nominally memory; possibly shared and possibly persistent) with a flat address space. This software can trace usage of shared and/or non-volatile storage in addition to private RAM (random access memory). Sptrace is implemented as a set of C function calls that are invoked from within the software that is being examined. The function calls fall into two broad classes: (1) functions that are embedded within the heap management software [e.g., JPL's SDR (Simple Data Recorder) and PSM (Personal Space Management) systems] to enable heap usage analysis by populating a virtual time-sequenced log of usage activity, and (2) reporting functions that are embedded within the application program whose behavior is suspect. For ease of use, these functions may be wrapped privately inside public functions offered by the heap management software. Sptrace can be used for VxWorks or RTEMS realtime systems as easily as for Linux or OS/X systems.

  7. FTAP: a Linux-based program for tapping and music experiments.

    PubMed

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  8. Linux VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Methods to connect to NREL's HPC systems via the HPC VPN: one using a simple command line, and a second ... UserID in place of the one in the example image. Connection name: hpcvpn; Gateway: hpcvpn.nrel.gov; User ... hpcvpn option as seen in the following screen shot. NetworkManager will present you with ...

  9. Mass Storage System - Gyrfalcon | High-Performance Computing | NREL

    Science.gov Websites

    At the command line of one of Peregrine's login nodes, enter one of the following commands to copy ... directory.tgz /mss/ ... Option 3: The rsync command compares one directory to another and makes ... Option 4: The simple Linux cp command can be used to copy a file from one directory to another

  10. Cloud prediction of protein structure and function with PredictProtein for Debian.

    PubMed

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Staniewski, Cedric; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome.

  11. Cloud Prediction of Protein Structure and Function with PredictProtein for Debian

    PubMed Central

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome. PMID:23971032

  12. Introduction to LINUX OS for new LINUX users - Basic Information Before Using The Kurucz Codes Under LINUX-.

    NASA Astrophysics Data System (ADS)

    Çay, M. Taşkin

    Recently the ATLAS suite (Kurucz) was ported to LINUX OS (Sbordone et al.). Those users of the suite unfamiliar with LINUX need to know some basic information to use these versions. This paper is a quick overview and introduction to LINUX OS. The reader is highly encouraged to own a book on LINUX OS for comprehensive use. Although the subjects and examples in this paper are for general use, they also help with installing and running the ATLAS suite.

  13. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
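
    A minimal Python sketch of the batching idea: generate one Grid Engine job per consecutive marker window so the exhaustive window analysis runs in parallel on the cluster. qsub is the standard Grid Engine submission command, but the job script name and its arguments are hypothetical, not the authors' scripts.

      import subprocess

      def submit_window_jobs(loci, max_width, script="run_fbat.sh"):
          """Submit one qsub job per consecutive marker window of width 1..max_width."""
          job_count = 0
          for width in range(1, max_width + 1):
              for start in range(0, loci - width + 1):
                  # each job analyses markers [start, start+width) with FBAT/Unphased
                  subprocess.run(["qsub", script, str(start), str(width)], check=True)
                  job_count += 1
          return job_count

      if __name__ == "__main__":
          print(submit_window_jobs(loci=26, max_width=5), "jobs submitted")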

  14. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  15. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive raw observational data covering the whole electromagnetic spectrum, and data processing that demands more professional skill than an individual or even a small team possesses. AIRE-Linux, a specially designed Linux distributed to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft, etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a Beta version of AIRE-Linux is ready for download and testing.

  16. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region growing.
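
    A toy 1-D Python analogue of the hierarchical stepwise region-growing step inside HSEG: repeatedly merge the pair of spatially adjacent regions with the closest means and record the segmentation at successive levels of detail. This is only a conceptual sketch under simplified assumptions, not the HSEG/RHSEG implementation.

      def hierarchical_merge(values, levels=3):
          """Greedy stepwise merging of 1-D 'regions', recording each level of detail."""
          regions = [[v] for v in values]                 # start: one pixel per region
          hierarchy = [list(map(tuple, regions))]
          while len(regions) > levels:
              means = [sum(r) / len(r) for r in regions]
              # find the most similar pair of spatially adjacent regions
              i = min(range(len(regions) - 1), key=lambda k: abs(means[k] - means[k + 1]))
              regions[i] = regions[i] + regions.pop(i + 1)   # merge them
              hierarchy.append(list(map(tuple, regions)))
          return hierarchy

      if __name__ == "__main__":
          for level, segs in enumerate(hierarchical_merge([1, 2, 2, 8, 9, 9, 4, 5])):
              print("level", level, segs)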

  17. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation depending on the size of the building. To manually create these files is a time consuming process that would not be practical when trying to create input files for thousands of buildings needed to simulate national building energy performance. To streamline the process needed to create the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine while the second method carries out all of the preprocessing on the Linux cluster by using an in-house built utility called Generalized Parametrics (GPARM). A comma delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using a Linux utility called "make", the idf files can then be automatically run through the Linux cluster and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files results in the ability to batch simulate building energy performance and scale the result to national energy consumption estimates.

  18. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application

    NASA Astrophysics Data System (ADS)

    Thubaasini, P.; Rusnida, R.; Rohani, S. M.

    This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. Besides that, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows- and Linux-based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux. The performance of Linux is measured against three main criteria: frame rate, image quality, and mouse motion.

  19. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    NASA Astrophysics Data System (ADS)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis, and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. From a questionnaire survey of the 43 students who used the Live Linux CD, we found that our Live Linux was useful for about 80 percent of the students. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  20. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    PubMed

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow preparation of a submission-ready scientific manuscript without the need for proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps for preparation of a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  1. A Scalable and Dynamic Testbed for Conducting Penetration-Test Training in a Laboratory Environment

    DTIC Science & Technology

    2015-03-01

    entry point through which to execute a payload to accomplish a higher-level goal: executing arbitrary code, escalating privileges, pivoting...Mobile Ad Hoc Network Emulator (EMANE) can emulate the entire network stack (physical to application-layer protocols). 2. Methodology To build a...to host Windows, Linux, MacOS, Android, and other operating systems without much effort. E. A simple and automatic “restore” function: Many

  2. Evaluating and Mitigating Software Supply Chain Security Risks

    DTIC Science & Technology

    2010-05-01

    is returned for all employees. The risk associated with this defect is high; variants of it have been used in attacks that caused credit card data to...via Common Access Card (CAC) readers). 4.2.1 Supplier Capability Because the project is already in development, the...ClearCase, Windows, Linux and Solaris, network administration, TCP/IP, X/Motif, DII COE, Simple Network Management Protocol (SNMP), Agent Technology

  3. TICK: Transparent Incremental Checkpointing at Kernel Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.

  4. Potential performance bottleneck in Linux TCP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wenji; Crawford, Matt; /Fermilab

    2006-12-01

    TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.

  5. Preparing a scientific manuscript in Linux: Today's possibilities and limitations

    PubMed Central

    2011-01-01

    Background An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow preparation of a submission-ready scientific manuscript without the need for proprietary software. Findings Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps for preparation of a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux. PMID:22018246

  6. BSD Portals for LINUX 2.0

    NASA Technical Reports Server (NTRS)

    McNab, A. David; woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open () requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open (2) is performed. The resulting file descriptor is passed back to the kernel which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open (2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
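
    As a hedged illustration of the connection-service idea (not code from the paper): with a portal daemon mounted at /p implementing the classic 4.4BSD "tcp" service, an ordinary open() of a path that encodes a host and port comes back as a connected socket descriptor, so even a simple script can talk over the network. The mount point and path layout below follow the traditional BSD convention and are assumptions here.

        import os

        # Assumes a portal daemon is mounted at /p and provides the "tcp" connection
        # service; the /p/tcp/<host>/<port> layout is the traditional BSD convention.
        fd = os.open("/p/tcp/www.example.org/80", os.O_RDWR)

        os.write(fd, b"HEAD / HTTP/1.0\r\nHost: www.example.org\r\n\r\n")
        print(os.read(fd, 512).decode(errors="replace"))
        os.close(fd)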

  7. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    REX Control System is a professional advanced tool for the design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, towards analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way of implementation of the control algorithm is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded onto a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux and Linux/Xenomai, deployed on SBC, IPC, PAC, Raspberry Pi and others, with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering many additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focusing on the control of case studies of physical models with classical and advanced control algorithms.

  8. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  9. Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy

    PubMed Central

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-01-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313

  10. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.

  11. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  12. Wrapping up BLAST and other applications for use on Unix clusters.

    PubMed

    Hokamp, Karsten; Shields, Denis C; Wolfe, Kenneth H; Caffrey, Daniel R

    2003-02-12

    We have developed two programs that speed up common bioinformatic applications by spreading them across a UNIX cluster: (1) BLAST.pm, a new module for the 'MOLLUSC' package; and (2) WRAPID, a simple tool for parallelizing large numbers of small instances of programs such as BLAST, FASTA and CLUSTALW. The packages were developed in Perl on a 20-node Linux cluster and are provided together with a configuration script and documentation. They can be freely downloaded from http://wolfe.gen.tcd.ie/wrapper.
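
    The abstract gives no implementation details beyond Perl on a 20-node Linux cluster, so the following is only a hedged Python sketch of the general pattern such a wrapper automates: pre-split query files are handed to BLAST instances on remote nodes, a few at a time, over ssh. The host names, chunk files, and legacy blastall flags are all illustrative.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        NODES = ["node01", "node02", "node03", "node04"]    # hypothetical cluster hosts
        CHUNKS = ["query_%02d.fa" % i for i in range(12)]   # pre-split FASTA query files

        def run_chunk(job):
            node, chunk = job
            # Legacy NCBI 'blastall' invocation, shown for illustration only.
            cmd = ["ssh", node, "blastall", "-p", "blastp", "-d", "nr",
                   "-i", chunk, "-o", chunk + ".out"]
            subprocess.run(cmd, check=True)
            return chunk

        # Round-robin the chunks over the nodes and run them concurrently.
        jobs = [(NODES[i % len(NODES)], c) for i, c in enumerate(CHUNKS)]
        with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
            for finished in pool.map(run_chunk, jobs):
                print("done:", finished)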

  13. CHROMA: consensus-based colouring of multiple alignments for publication.

    PubMed

    Goodstadt, L; Ponting, C P

    2001-09-01

    CHROMA annotates multiple protein sequence alignments by consensus to produce formatted and coloured text suitable for incorporation into other documents for publication. The package is designed to be flexible and reliable, and has a simple-to-use graphical user interface running under Microsoft Windows. Both the executables and source code for CHROMA running under Windows and Linux (portable command-line only) are freely available at http://www.lg.ndirect.co.uk/chroma. Software enquiries should be directed to CHROMA@lg.ndirect.co.uk.

  14. Integrating a Trusted Computing Base Extension Server and Secure Session Server into the LINUX Operating System

    DTIC Science & Technology

    2001-09-01

    Readily Available Linux has been copyrighted under the terms of the GNU General Public License (GPL). This is a license written by the Free...GNOME and KDE. d. Portability Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] The GNU Project is coordinated by the

  15. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozana, Stepan, E-mail: stepan.ozana@vsb.cz; Pies, Martin, E-mail: martin.pies@vsb.cz; Docekal, Tomas, E-mail: docekalt@email.cz

    REX Control System is a professional advanced tool for the design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, towards analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way of implementation of the control algorithm is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded onto a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux and Linux/Xenomai, deployed on SBC, IPC, PAC, Raspberry Pi and others, with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering many additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focusing on the control of case studies of physical models with classical and advanced control algorithms.

  16. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
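
    The system itself is written in Perl with SOAP-driven web services; purely as a hedged sketch of the master-side pattern it describes (a small pool of threads pulling work items from a queue and handing them to remote model services), the Python snippet below uses a stand-in function where a real SOAP call would go. The endpoints and the work-item granularity are invented for the example.

        import time
        from queue import Queue
        from threading import Thread

        # Hypothetical worker endpoints; in the paper these are Perl SOAP web
        # services running the 1-D model on remote Linux and MS-Windows hosts.
        ENDPOINTS = ["http://host1:8080/model", "http://host2:8080/model"]

        def call_service(endpoint, cell_id):
            # Stand-in for the real SOAP request/response round trip.
            time.sleep(0.01)
            return (cell_id, "result from " + endpoint)

        def worker(endpoint, cells, results):
            while True:
                cell = cells.get()
                if cell is None:        # poison pill: no more work for this thread
                    break
                results.append(call_service(endpoint, cell))

        cells, results = Queue(), []
        for cell_id in range(100):      # e.g. grid cells covering the model domain
            cells.put(cell_id)
        for _ in ENDPOINTS:
            cells.put(None)

        threads = [Thread(target=worker, args=(ep, cells, results)) for ep in ENDPOINTS]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(len(results), "cells processed")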

  17. Tuning Linux to meet real time requirements

    NASA Astrophysics Data System (ADS)

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

    There is a desire to use Linux in military systems. Customers are requesting that contractors use open source to the maximum possible extent in contracts. Linux is probably the operating system best suited to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real-time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which builds a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, support for the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2]. Thus it is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
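
    The abstract does not list the specific tuning steps, so the snippet below is only a generic, hedged illustration of the user-space side of preparing a task for soft real-time work on a stock kernel: move the process into the SCHED_FIFO scheduling class and lock its memory so page faults cannot add latency. It requires root (or CAP_SYS_NICE and CAP_IPC_LOCK) and is not the method described in the paper.

        import ctypes
        import os

        # Put this process into the POSIX FIFO real-time scheduling class.
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

        # Lock current and future pages into RAM (MCL_* values as defined on Linux).
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        MCL_CURRENT, MCL_FUTURE = 1, 2
        if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
            raise OSError(ctypes.get_errno(), "mlockall failed")

        # ... latency-sensitive loop would run here ...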

  18. Linux thin-client conversion in a large cardiology practice: initial experience.

    PubMed

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  19. Development and Testing of a High-Speed Real-Time Kinematic Precise DGPS Positioning System Between Two Aircraft

    DTIC Science & Technology

    2006-09-01

    work-horse for this thesis. He spent hours writing some of the more tedious code, and as much time helping me learn C++ and Linux. He was always there...compared with C++, and the need to use Linux as the operating system, the filter was coded using C++ and KDevelop [28] in SUSE LINUX Professional 9.2 [42...The driving factor for using Linux was the operating system’s ability to access the serial ports in a reliable fashion. Under the original MATLAB® and

  20. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    PubMed

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for the portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  1. Next-Generation A/D Sampler ADS3000+ for VLBI2010

    NASA Technical Reports Server (NTRS)

    Takefuji, Kazuhiro; Takeuchi, Hiroshi; Tsutsumi, Masanori; Koyama, Yasuhiro

    2010-01-01

    A high-speed A/D sampler, called the ADS3000+, was developed in 2008; it can sample one analog signal and deliver up to 4 Gbps to a versatile Linux PC. After A/D conversion, the ADS3000+ can perform digital signal processing such as real-time DBBC (Digital Base Band Conversion) and FIR filtering, such as simple CW RFI filtering, using the installed FPGAs. A 4 Gsps fringe test with the ADS3000+ has been successfully performed. The ADS3000+ will not be used exclusively for VLBI but will also be employed in other applications.

  2. 77 FR 5864 - BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... current and accurate information concerning the securities of BluePoint Linux Software Corp. because it...

  3. Kernel-based Linux emulation for Plan 9.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  4. Open discovery: An integrated live Linux platform of Bioinformatics tools

    PubMed Central

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for the portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Availability Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in PMID:19238235

  5. The Research on Linux Memory Forensics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.

  6. PKPass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, Ryan M.

    Password management solutions exist, but few are designed for enterprise systems administrators sharing oncall rotations. Due to the Multi-Factor Level of Assurance 4 effort, DOE is now distributing PIV cards with cryptographically signed certificate and private key pairs to administrators and other security-significant users. We utilize this public key infrastructure (PKI) to encrypt passwords for other recipients in a secure way. This is cross platform (works on OSX and Linux systems), and has already been adopted internally by the NCCS systems administration staff to replace their old password book system.
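
    The abstract does not describe the formats or commands involved; as a hedged sketch of the general approach it implies (encrypting a shared secret to several recipients' X.509 certificates, such as those issued on PIV cards), one could shell out to OpenSSL's S/MIME encryption, with each recipient later decrypting using the private key on their own card. The file names are invented, and the real PKPass tool may work quite differently.

        import subprocess

        # Hypothetical certificates exported from the recipients' PIV cards.
        recipient_certs = ["alice_piv_cert.pem", "bob_piv_cert.pem"]

        # Encrypt the password file so that any listed recipient can decrypt it with
        # the private key held on their own card.
        subprocess.run(
            ["openssl", "smime", "-encrypt", "-aes256",
             "-in", "oncall-password.txt", "-out", "oncall-password.p7m"]
            + recipient_certs,
            check=True)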

  7. FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi.

    PubMed

    Sapes, Jordi; Solsona, Francesc

    2016-02-06

    Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the operating system Linux, jointly with the Raspberry Pi motherboard. Thus, embedding devices on Raspberry-Linux systems is a goal in order to make competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux.

  8. Open Technologies at Athabasca University's Geospace Observatories

    NASA Astrophysics Data System (ADS)

    Connors, M. G.; Schofield, I. S.

    2012-12-01

    Athabasca University Geophysical Observatories feature two auroral observation sites situated in the subauroral zone of western Canada, separated by approximately 25 km. These sites are both on high-speed internet and ideal for observing phenomena detectable from this latitude, which include noctilucent clouds, meteors, and magnetic and optical aspects of the aurora. General aspects of use of Linux in observatory management are described, with emphasis on recent imaging projects involving control of high resolution digital SLR cameras at low cadence, and inexpensive white light analog video cameras at 30 Hz. Linux shell scripts are extensively used, with image capture controlled by gphoto2, the ivtv-utils package, x264 video coding library, and ffmpeg. Imagemagick allows processing of images in an automated fashion. Image archives and movies are created and can be correlated with magnetic data. Much of the magnetic data stream also uses GMT (Generic Mapping Tools) within shell scripts for display. Additionally, SPASE metadata are generated for most of the magnetic data, thus allowing users of our AUTUMN magnetic data repository to perform SPASE queries on the dataset. Visualization products from our twin observatories will be presented.
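
    As a hedged sketch of the kind of capture-and-encode automation described (the sites use shell scripts; here the same external tools are simply called from Python, and every file name, cadence, and flag is illustrative rather than taken from the observatory's scripts):

        import subprocess
        import time

        stamp = time.strftime("%Y%m%d_%H%M%S")
        still = "aurora_" + stamp + ".jpg"

        # Low-cadence still capture from a tethered DSLR via gphoto2.
        subprocess.run(["gphoto2", "--capture-image-and-download", "--filename", still],
                       check=True)

        # Downscale the frame with ImageMagick for the web archive.
        subprocess.run(["convert", still, "-resize", "1024x", "web_" + still], check=True)

        # Assemble the night's frames into an H.264 movie with ffmpeg.
        subprocess.run(["ffmpeg", "-y", "-framerate", "15", "-pattern_type", "glob",
                        "-i", "aurora_*.jpg", "-c:v", "libx264", "night.mp4"], check=True)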

  9. Missed deadline notification in best-effort schedulers

    NASA Astrophysics Data System (ADS)

    Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.

    2003-12-01

    It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.

  10. Simplified Virtualization in a HEP/NP Environment with Condor

    NASA Astrophysics Data System (ADS)

    Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.

    2012-12-01

    In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
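
    For a flavour of the libvirt layer mentioned above, here is a hedged sketch that starts one transient KVM guest through the libvirt Python bindings; the domain XML is a minimal illustrative stub (image path, sizing, and naming are invented), not the configuration actually used with Condor in this work.

        import libvirt  # libvirt-python bindings

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>batch-vm01</name>
          <memory unit='GiB'>2</memory>
          <vcpu>1</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/var/lib/libvirt/images/sl6-worker.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")
        dom = conn.createXML(DOMAIN_XML, 0)  # define and start a transient guest
        print("started", dom.name(), "id", dom.ID())
        conn.close()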

  11. The Longhorn Array Database (LAD): An Open-Source, MIAME compliant implementation of the Stanford Microarray Database (SMD)

    PubMed Central

    Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R

    2003-01-01

    Background The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description The Longhorn Array Database (LAD) is a MIAME compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available at Conclusions Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545

  12. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  13. Software architecture of the Magdalena Ridge Observatory Interferometer

    NASA Astrophysics Data System (ADS)

    Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John

    2010-07-01

    Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.

  14. Source Code Analysis Laboratory (SCALe)

    DTIC Science & Technology

    2012-04-01

    True positives versus flagged nonconformities (FNC), by software system (TP/FNC, ratio): Mozilla Firefox version 2.0, 6/12, 50%; Linux kernel version 2.6.15, 10/126, 8%...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular...servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux. A

  15. FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi

    PubMed Central

    Sapes, Jordi; Solsona, Francesc

    2016-01-01

    Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the operating system Linux, jointly with the Raspberry Pi motherboard. Thus, embedding devices on Raspberry-Linux systems is a goal in order to make competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux. PMID:26861340

  16. Improving Block-level Efficiency with scsi-mq

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldwell, Blake A

    2015-01-01

    Current generation solid-state storage devices are exposing new bottlenecks in the SCSI and block layers of the Linux kernel, where IO throughput is limited by lock contention, inefficient interrupt handling, and poor memory locality. To address these limitations, the Linux kernel block layer underwent a major rewrite with the blk-mq project to move from a single request queue to a multi-queue model. The Linux SCSI subsystem rework to make use of this new model, known as scsi-mq, has been merged into the Linux kernel, and work is underway for dm-multipath support in the upcoming Linux 4.0 kernel. These pieces were necessary to make use of the multi-queue block layer in a Lustre parallel filesystem with high availability requirements. We undertook adding support of the 3.18 kernel to Lustre with scsi-mq and dm-multipath patches to evaluate the potential of these efficiency improvements. In this paper we evaluate the block-level performance of scsi-mq with backing storage hardware representative of an HPC-targeted Lustre filesystem. Our findings show that SCSI write request latency is reduced by as much as 13.6%. Additionally, when profiling the CPU usage of our prototype Lustre filesystem, we found that CPU idle time increased by a factor of 7 with Linux 3.18 and blk-mq as compared to a standard 2.6.32 Linux kernel. Our findings demonstrate increased efficiency of the multi-queue block layer even with the disk-based caching storage arrays used in existing parallel filesystems.

  17. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  18. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which could be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations.
    Program summary: Catalogue identifier: AERR_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html; Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 111327; No. of bytes in distributed program, including test data, etc.: 608411; Distribution format: tar.gz; Programming language: C++, OpenGL, GLSL, OpenCL; Computer: Linux and Windows platforms with OpenGL support; Operating system: Linux and Windows; RAM: source code 4.5 MB, complete package 242 MB; Classification: 14, 16.9; External routines: OpenGL, OpenCL; Nature of problem: integrate N-body simulations, mass-spring models; Solution method: numerical integration of N-body simulations, 3D rendering via OpenGL; Running time: problem dependent.
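
    As a hedged illustration of the Euler-method bridge the abstract refers to (scaled units and the step size are chosen purely for readability and are not taken from MPPhys), an explicit Euler integration of the gravitational two-body problem fits in a few lines:

        import math

        G = 1.0                      # gravitational constant in scaled units
        m1, m2 = 1.0, 0.001          # e.g. a star and a light companion
        r1, r2 = [0.0, 0.0], [1.0, 0.0]
        v1, v2 = [0.0, 0.0], [0.0, 1.0]
        dt, steps = 1e-3, 10000

        for _ in range(steps):
            dx, dy = r2[0] - r1[0], r2[1] - r1[1]
            d3 = (dx * dx + dy * dy) ** 1.5
            ax1, ay1 = G * m2 * dx / d3, G * m2 * dy / d3    # acceleration of body 1
            ax2, ay2 = -G * m1 * dx / d3, -G * m1 * dy / d3  # acceleration of body 2
            # Explicit Euler update: positions from current velocities, then velocities.
            r1[0] += v1[0] * dt; r1[1] += v1[1] * dt
            r2[0] += v2[0] * dt; r2[1] += v2[1] * dt
            v1[0] += ax1 * dt;   v1[1] += ay1 * dt
            v2[0] += ax2 * dt;   v2[1] += ay2 * dt

        print("final separation:", math.hypot(r2[0] - r1[0], r2[1] - r1[1]))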

  19. Elan4/SPARC V9 Cross Loader and Dynamic Linker

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebaillif-Delamare, Fabien; Petrini, Fabrizio

    2004-10-25

    The Elan4/SPARC V9 Cross Loader and Linker is part of the Linux system software that allows the dynamic loading and linking of user code in the Quadrics QsNETII network interface, also called Elan4. Elan4 uses a thread processor that is based on the assembly instruction set of the SPARC V9. All this software is integrated as a Linux kernel module in the Linux 2.6.5 release.

  20. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2015-06-01

    examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully...memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills ...

  1. Project Management Software for Distributed Industrial Companies

    NASA Astrophysics Data System (ADS)

    Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.

    This paper gives an overview of the development of a new software solution for project management, intended mainly for use in an industrial environment. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to the development of appropriate tools for tracking, storing and analyzing information about the project, and delivering it in time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, and the fact that only short training and basic computer skills are needed for operators.

  2. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation has more advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29.
    Program summary: Program title: TMVA-BNN; Catalogue identifier: AEJX_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: BSD license; No. of lines in distributed program, including test data, etc.: 5094; No. of bytes in distributed program, including test data, etc.: 1,320,987; Distribution format: tar.gz; Programming language: C++; Computer: any computer system or cluster with a C++ compiler and a UNIX-like operating system; Operating system: most UNIX/Linux systems (the application programs were thoroughly tested under Fedora and Scientific Linux CERN); Classification: 11.9; External routines: ROOT package version 5.29 or higher (http://root.cern.ch); Nature of problem: non-parametric fitting of multivariate distributions; Solution method: an implementation of a Neural Network following the Bayesian statistical interpretation, using the Laplace approximation for the Bayesian marginalizations and providing automatic complexity control and uncertainty estimation; Running time: time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.

  3. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  4. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    docs.us.dell.com/support/edocs/systems/pe1650/en/it/index.htm> (20 August 2004) “HOWTO: Installing Web Services with Linux/Tomcat/Apache/Struts...configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web server was configured as the external interface to...1650, dual processor, blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web

  5. LK Scripting Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The LK scripting language is a simple and fast computer programming language designed for easy integration with existing software to enable automation of tasks. The LK language is used by NREL’s System Advisor Model (SAM), the SAM Software Development Kit (SDK), and SolTrace products. LK is easily extensible and adaptable to new software due to its small footprint, and is designed to be statically linked into other software. It is written in standard C++, is cross-platform (Windows, Linux, and OSX), and includes optional portions that enable direct integration with graphical user interfaces written in the open source C++ wxWidgets Version 3.0+ toolkit.

  6. HEP Computing

    Science.gov Websites

    Web page describing HEP computing accounts: visitors who do not need a HEP Linux account can use the wireless network with their laptops; obtaining a HEP Linux account begins with submitting a new account application.

  7. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  8. End-To-End performance test of the LINC-NIRVANA Wavefront-Sensor system.

    NASA Astrophysics Data System (ADS)

    Berwein, Juergen; Bertram, Thomas; Conrad, Al; Briegel, Florian; Kittmann, Frank; Zhang, Xiangyu; Mohr, Lars

    2011-09-01

    LINC-NIRVANA is an imaging Fizeau interferometer for use at near-infrared wavelengths, being built for the Large Binocular Telescope. Multi-conjugate adaptive optics (MCAO) increases the sky coverage and the field of view over which diffraction-limited images can be obtained. For its MCAO implementation, LINC-NIRVANA utilizes four wavefront sensors in total; each of the two beams is corrected by both a ground-layer wavefront sensor (GWS) and a high-layer wavefront sensor (HWS). The GWS controls the adaptive secondary deformable mirror (DM), which is driven by a DSP slope computing unit, whereas the HWS controls an internal DM via computations provided by an off-the-shelf multi-core Linux system. Using wavefront sensor data collected from a prior lab experiment, we have shown via simulation that the Linux-based system is sufficient to operate at 1 kHz, with jitter well below the needs of the final system. Based on that setup, we tested the end-to-end performance and latency through all parts of the system, which includes the camera, the wavefront controller, and the deformable mirror. We will present our loop control structure and the results of those performance tests.

  9. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul D.

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  10. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul David

    2015-10-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  11. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  12. A General Purpose High Performance Linux Installation Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, Alf

    2002-06-17

    With more and more and larger and larger Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients, thus, is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
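
    A hedged sketch of one small piece of such an infrastructure, generating per-node PXELINUX boot entries that point each machine at its kickstart file; the MAC addresses, paths, and kernel arguments are invented, and the SLAC setup described in the paper may differ in detail.

        import os

        # Hypothetical node inventory: host name -> NIC MAC address.
        nodes = {"node001": "00:11:22:33:44:55", "node002": "00:11:22:33:44:66"}

        ENTRY = ("default ks\n"
                 "label ks\n"
                 "  kernel vmlinuz\n"
                 "  append initrd=initrd.img ks=nfs:installsrv:/kickstarts/{host}.cfg\n")

        os.makedirs("pxelinux.cfg", exist_ok=True)
        for host, mac in nodes.items():
            # PXELINUX looks for a file named 01-<mac-with-dashes> under pxelinux.cfg/.
            fname = os.path.join("pxelinux.cfg", "01-" + mac.replace(":", "-").lower())
            with open(fname, "w") as f:
                f.write(ENTRY.format(host=host))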

  13. [Study for lung sound acquisition module based on ARM and Linux].

    PubMed

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux as its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sounds reliably and effectively.

  14. Interfacing the Controllogics PLC over Ethernet/IP.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasemir, K. U.; Dalesio, L. R.

    2001-01-01

    The Allen-Bradley ControlLogix [1] line of programmable logic controllers (PLCs) offers several interfaces: Ethernet, ControlNet, DeviceNet, RS-232 and others. The ControlLogix Ethernet interface module 1756-ENET uses EtherNet/IP, the ControlNet protocol [2], encapsulated in Ethernet packages, with specific service codes [3]. A driver for the Experimental Physics and Industrial Control System (EPICS) has been developed that utilizes this EtherNet/IP protocol for controllers running the vxWorks RTOS as well as a Win32 and Unix/Linux test program. Features, performance and limitations of this interface are presented.

  15. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguckin, Theodore

    2008-10-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Redhat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational Control Room environment from the HP-UX to Linux platform as well as the many hurdles that had to be overcome throughout the transition period (including a discussion of ...

  16. Real Time Linux - The RTOS for Astronomy?

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real-time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real-time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real-time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter, peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real-time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real-time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and makes it possible to have a fully functioning workstation in which hard real-time performance co-exists with normal use. The negatives were also discussed: the lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real-time programming issues. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.

  17. Gener: a minimal programming module for chemical controllers based on DNA strand displacement

    PubMed Central

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-01-01

    Summary: Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of the DNA programs based on a simple two-domain strand displacement algebra, the minimal such algebra available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools, and in particular, its programs can be exported to Microsoft Research's DSD tool as well as to LaTeX. Availability and implementation: Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a Windows executable that can be run on Mac OS X and Linux by using Mono. Contact: ozan@cosbi.eu PMID:25957353

  18. Gener: a minimal programming module for chemical controllers based on DNA strand displacement.

    PubMed

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-09-01

    Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of the DNA programs based on a simple two-domain strand displacement algebra, the minimal such algebra available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools, and in particular, its programs can be exported to Microsoft Research's DSD tool as well as to LaTeX. Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a Windows executable that can be run on Mac OS X and Linux by using Mono. Contact: ozan@cosbi.eu. © The Author 2015. Published by Oxford University Press.

  19. Linux Incident Response Volatile Data Analysis Framework

    ERIC Educational Resources Information Center

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an increasingly emphasized subject area in information technology, driven by the growing need to protect data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  20. Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters

    DTIC Science & Technology

    2015-01-01

    …onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot… [Reference fragment: Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki. Small UAV for immediate hazard map generation. In AIAA Infotech@Aerospace Conf, May 2007.]

  1. Implementation of image transmission server system using embedded Linux

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee

    2005-12-01

    In this paper, we implemented an image transmission server system using an embedded system that is dedicated to a specific task and is easy to install and move. Since the embedded system has lower capability than a PC, we had to reduce the amount of computation required for baseline JPEG image compression and transmission. We used the Redhat Linux 9.0 OS on the host PC and a target board based on embedded Linux. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an Altera chip. For efficiency, and to avoid vendor-specific constraints, we wrote the device driver as a kernel module.

  2. Linux Adventures on a Laptop. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  3. Drowning in PC Management: Could a Linux Solution Save Us?

    ERIC Educational Resources Information Center

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  4. 78 FR 57648 - Notice of Issuance of Final Determination Concerning Video Teleconferencing Server

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    … the Chinese-origin Video Board and the Filter Board, impart the essential character to the video… includes the codec; a network filter electronic circuit board ("Filter Board"); a housing case; a power… ("Linux software"). The Linux software allows the Filter Board to inspect each Ethernet packet of…

  5. Chicks in Charge: Andrea Baker & Amy Daniels--Airport High School Media Center, Columbia, SC

    ERIC Educational Resources Information Center

    Library Journal, 2004

    2004-01-01

    This article briefly discusses two librarians' exploration of Linux. Andrea Baker and Amy Daniels were tired of telling their students that new technology items were not in the budget. They explored Linux, a free operating system that, together with free software, can be installed on recycled older computers.

  6. Diversifying the Department of Defense Network Enterprise with Linux

    DTIC Science & Technology

    2010-03-01

    Keywords: Cyberspace, Cyberwar, Legacy, Inventory, Acquisition, Competitive Advantage, Coalition Communications, Ubiquitous, Strategic, Centricity, Kaizen, ISO, Outsource. CLASSIFICATION: Unclassified. Historically, the United States and its closest allies have grown increasingly reliant… control through the use of continuous improvement processes (Kaizen). In choosing the Linux client operating system, the move encourages open standards…

  7. Managing a Real-Time Embedded Linux Platform with Buildroot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamond, J.; Martin, K.

    2015-01-01

    Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that ranges from 3 to 20 megabytes in size, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.

  8. SPOCS: software for predicting and visualizing orthology/paralogy relationships among genomes.

    PubMed

    Curtis, Darren S; Phillips, Aaron R; Callister, Stephen J; Conlan, Sean; McCue, Lee Ann

    2013-10-15

    At the rate that prokaryotic genomes can now be generated, comparative genomics studies require a flexible method for quickly and accurately predicting orthologs among the rapidly changing set of genomes available. SPOCS implements a graph-based ortholog prediction method to generate a simple tab-delimited table of orthologs and in addition, html files that provide a visualization of the predicted ortholog/paralog relationships to which gene/protein expression metadata may be overlaid. A SPOCS web application is freely available at http://cbb.pnnl.gov/portal/tools/spocs.html. Source code for Linux systems is also freely available under an open source license at http://cbb.pnnl.gov/portal/software/spocs.html; the Boost C++ libraries and BLAST are required.

  9. NREL's OpenStudio Helps Design More Efficient Buildings (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-07-01

    The National Renewable Energy Laboratory (NREL) has created the OpenStudio software platform that makes it easier for architects and engineers to evaluate building energy efficiency measures throughout the design process. OpenStudio makes energy modeling more accessible and affordable, helping professionals to design structures with lower utility bills and less carbon emissions, resulting in a healthier environment. OpenStudio includes a user-friendly application suite that makes the U.S. Department of Energy's EnergyPlus and Radiance simulation engines easier to use for whole building energy and daylighting performance analysis. OpenStudio is freely available and runs on Windows, Mac, and Linux operating systems.

  10. NSTX-U Control System Upgrades

    DOE PAGES

    Erickson, K. G.; Gates, D. A.; Gerhardt, S. P.; ...

    2014-06-01

    The National Spherical Tokamak Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially including an elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Redhat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

  11. Malware Memory Analysis of the Jynx2 Linux Rootkit (Part 1): Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2014-10-01

    …a clear indication that not a single scanner was able to detect it as malicious, infected, or associated with the Jynx2 rootkit. SHA256…

  12. Teaching Hands-On Linux Host Computer Security

    ERIC Educational Resources Information Center

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  13. Will Your Next Supercomputer Come from Costco?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool's joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it's true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  14. Development of a platform-independent receiver control system for SISIFOS

    NASA Astrophysics Data System (ADS)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We are presenting a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a simple query language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication and SCPI as a command control language. The complete system is in principle platform independent, but for cost saving reasons we are actually running it on a PC486 under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC we are using a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like an ordinary device. The advantages of this system are firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.

  15. A Real-Time Linux for Multicore Platforms

    DTIC Science & Technology

    2013-12-20

    …(under ARO support) to obtain a fully-functional OS for supporting real-time workloads on multicore platforms. This system, called LITMUS-RT (LInux Testbed for MUltiprocessor Scheduling in Real-Time systems), is open-source software and allows different multiprocessor real-time scheduling policies to be specified as plugin components.

  16. Linux OS Jitter Measurements at Large Node Counts using a BlueGene/L

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R; Tauferner, Mr. Andrew; Inglett, Mr. Todd

    2010-01-01

    We present experimental results for a coordinated scheduling implementation of the Linux operating system. Results were collected on an IBM Blue Gene/L machine at scales up to 16K nodes. Our results indicate coordinated scheduling was able to provide a dramatic improvement in scaling performance for two applications characterized as bulk synchronous parallel programs.

  17. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    DTIC Science & Technology

    2013-09-01

    …solutions to virtualization include lightweight, user-level implementations on Linux operating systems, but these solutions are often dependent on a specific version of…

  18. Interactivity vs. fairness in networked linux systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wenji; Crawford, Matt; /Fermilab

    In general, the Linux 2.6 scheduler can ensure fairness and provide excellent interactive performance at the same time. However, our experiments and mathematical analysis have shown that the current Linux interactivity mechanism tends to incorrectly categorize non-interactive network applications as interactive, which can lead to serious fairness or starvation issues. In the extreme, a single process can unjustifiably obtain up to 95% of the CPU! The root causes are that: (1) network packets arrive at the receiver independently and discretely, and the 'relatively fast' non-interactive network process might frequently sleep to wait for packet arrival. Though each sleep lasts for a very short period of time, the wait-for-packet sleeps occur so frequently that they lead to interactive status for the process. (2) The current Linux interactivity mechanism provides the possibility that a non-interactive network process could receive a high CPU share, and at the same time be incorrectly categorized as 'interactive.' In this paper, we propose and test a possible solution to address the interactivity vs. fairness problems. Experimental results have demonstrated the effectiveness of the proposed solution.
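
    To make the failure mode concrete, the toy model below shows how frequent, very short waits-for-packet can accumulate into a high sleep-credit score even when a process consumes most of the CPU it is offered. The arithmetic is an illustrative simplification, not the actual Linux 2.6 scheduler formula.

      # Toy model (not the actual Linux 2.6 scheduler arithmetic) of how frequent,
      # very short waits-for-packet can inflate a "sleep average", causing a
      # CPU-hungry network process to be classified as interactive.
      def sleep_avg_after(bursts, run_ms, sleep_ms, cap_ms=1000):
          """Accumulate credit for sleeps and debit for CPU time, clamped to a cap."""
          avg = 0.0
          for _ in range(bursts):
              avg = min(cap_ms, avg + sleep_ms)  # credit for waiting on a packet
              avg = max(0.0, avg - run_ms)       # debit for the CPU burst that follows
          return avg

      # Many 1 ms waits interleaved with 0.5 ms CPU bursts: the score climbs toward
      # the cap even though the process uses most of the CPU it is offered.
      print(sleep_avg_after(bursts=5000, run_ms=0.5, sleep_ms=1.0))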

  19. The instrument control software package for the Habitable-Zone Planet Finder spectrometer

    NASA Astrophysics Data System (ADS)

    Bender, Chad F.; Robertson, Paul; Stefansson, Gudmundur Kari; Monson, Andrew; Anderson, Tyler; Halverson, Samuel; Hearty, Frederick; Levi, Eric; Mahadevan, Suvrath; Nelson, Matthew; Ramsey, Larry; Roy, Arpita; Schwab, Christian; Shetrone, Matthew; Terrien, Ryan

    2016-08-01

    We describe the Instrument Control Software (ICS) package that we have built for The Habitable-Zone Planet Finder (HPF) spectrometer. The ICS controls and monitors instrument subsystems, facilitates communication with the Hobby-Eberly Telescope facility, and provides user interfaces for observers and telescope operators. The backend is built around the asynchronous network software stack provided by the Python Twisted engine, and is linked to a suite of custom hardware communication protocols. This backend is accessed through Python-based command-line and PyQt graphical frontends. In this paper we describe several of the customized subsystem communication protocols that provide access to and help maintain the hardware systems that comprise HPF, and show how asynchronous communication benefits the numerous hardware components. We also discuss our Detector Control Subsystem, built as a set of custom Python wrappers around a C-library that provides native Linux access to the SIDECAR ASIC and Hawaii-2RG detector system used by HPF. HPF will be one of the first astronomical instruments on sky to utilize this native Linux capability through the SIDECAR Acquisition Module (SAM) electronics. The ICS we have created is very flexible, and we are adapting it for NEID, NASA's Extreme Precision Doppler Spectrometer for the WIYN telescope; we will describe this adaptation, and describe the potential for use in other astronomical instruments.
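
    The record above describes an asynchronous backend built on the Python Twisted engine with custom hardware-communication protocols. The sketch below shows only the generic Twisted server pattern such a backend relies on; the protocol, responses, and port number are hypothetical and are not taken from the HPF ICS source.

      # Minimal sketch of the asynchronous server pattern described above, using
      # Twisted's event loop. The protocol and port are hypothetical; the actual
      # HPF subsystem protocols are custom and not reproduced here.
      from twisted.internet import protocol, reactor

      class SubsystemProtocol(protocol.Protocol):
          """Echo-style handler standing in for a hardware-communication protocol."""
          def dataReceived(self, data):
              command = data.decode("ascii", errors="replace").strip()
              # A real backend would dispatch the command to a subsystem controller here.
              self.transport.write(f"ACK {command}\r\n".encode("ascii"))

      class SubsystemFactory(protocol.Factory):
          def buildProtocol(self, addr):
              return SubsystemProtocol()

      if __name__ == "__main__":
          reactor.listenTCP(9000, SubsystemFactory())  # port is an arbitrary example
          reactor.run()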

  20. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest-priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between both systems is possible through a loosely coupled mechanism based on message queues. Currently only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be quickly developed or simply ported, reducing time-to-market and budget.

  1. New automatic mode of visualizing the colon via Cine CT

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.

    2001-05-01

    Methods of visualizing the inner colonic wall by using CT images have actively been pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems which need satisfactory solutions. Among these, we address three problems in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time, and subsequently en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.

  2. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ˜5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code. Program summary 1Title of program:clonEasy Catalogue identifier:ADYD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1) Compilers:GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux) Programming language used:Linux shell (bash) script, FORTRAN 77 No. of bits in a word:32 No. of lines in distributed program, including test data, etc.:1916 No. of bytes in distributed program, including test data, etc.:18 202 Distribution format:tar.gz Nature of the physical problem:There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution:The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. 
Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S_J = (a^J S) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program: The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31, when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time: The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
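
    The seed-jump formula quoted in the summary, S_J = (a^J S) MOD m, can be evaluated directly with modular exponentiation rather than stepping the generator J times. A minimal sketch follows; the multiplier/modulus pair shown is the one commonly quoted for one of RANECU's component MLCGs and should be treated as an illustrative assumption rather than a statement about seedsMLCG's internals.

      # Sketch of the seed-jump formula: the MLCG state after J iterations is
      # S_J = (a**J * S_0) mod m, computed with modular exponentiation.
      # The (a, m) pair below is commonly quoted for one of RANECU's component
      # generators; treat it as an illustrative assumption.
      A, M = 40014, 2147483563

      def jump(seed, distance):
          """Return the MLCG state 'distance' steps ahead of 'seed'."""
          return (pow(A, distance, M) * seed) % M

      def disjoint_seeds(initial_seed, n_cpus, separation=10**12):
          """One starting seed per CPU, each 'separation' draws apart in the cycle."""
          return [jump(initial_seed, i * separation) for i in range(n_cpus)]

      print(disjoint_seeds(initial_seed=12345, n_cpus=4))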

  3. CommServer: A Communications Manager For Remote Data Sites

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D. L.

    2012-12-01

    CommServer is a software system that manages making connections to remote data-gathering stations, providing a simple network interface to client applications. The client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash shell, Python, Perl, AWK, under a set of conventions we refer to as ThinObject.
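
    The abstract above describes a request/relay pattern: a client asks for a site by name and the server dials the site, then shuttles bytes in both directions. The sketch below is a schematic Python version of that pattern only; the site table, port, and line-oriented request format are hypothetical and do not describe CommServer's actual wire protocol.

      # Schematic sketch of the name-request/relay pattern described above. The
      # site table and the simple "send the site name first" request format are
      # hypothetical, not CommServer's real protocol.
      import socket, threading

      SITES = {"ridge12": ("10.0.0.12", 4001)}   # name -> (host, port); illustrative only

      def pump(src, dst):
          """Copy bytes one way until either side closes."""
          while (data := src.recv(4096)):
              dst.sendall(data)

      def handle(client):
          name = client.recv(256).decode().strip()
          target = SITES.get(name)
          if target is None:
              client.close()
              return
          remote = socket.create_connection(target, timeout=30)
          remote.settimeout(None)   # connect with a timeout, then relay indefinitely
          for a, b in ((client, remote), (remote, client)):
              threading.Thread(target=pump, args=(a, b), daemon=True).start()

      def serve(port=7700):
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("", port))
          srv.listen()
          while True:
              conn, _ = srv.accept()
              threading.Thread(target=handle, args=(conn,), daemon=True).start()

      if __name__ == "__main__":
          serve()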

  4. Remote secure observing for the Faulkes Telescopes

    NASA Astrophysics Data System (ADS)

    Smith, Robert J.; Steele, Iain A.; Marchant, Jonathan M.; Fraser, Stephen N.; Mucke-Herzberg, Dorothea

    2004-09-01

    Since the Faulkes Telescopes are to be used by a wide variety of audiences, both powerful engineering level and simple graphical interfaces exist giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.

  5. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  6. Linux containers for fun and profit in HPC

    DOE PAGES

    Priedhorsky, Reid; Randles, Timothy C.

    2017-10-01

    This article outlines options for user-defined software stacks from an HPC perspective. Here, we argue that a lightweight approach based on Linux containers is most suitable for HPC centers because it provides the best balance between maximizing service of user needs and minimizing risks. We also discuss how containers work and several implementations, including Charliecloud, our own open-source solution developed at Los Alamos.

  7. Linux containers for fun and profit in HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Timothy C.

    This article outlines options for user-defined software stacks from an HPC perspective. Here, we argue that a lightweight approach based on Linux containers is most suitable for HPC centers because it provides the best balance between maximizing service of user needs and minimizing risks. We also discuss how containers work and several implementations, including Charliecloud, our own open-source solution developed at Los Alamos.

  8. VizieR Online Data Catalog: RefleX : X-ray-tracing code (Paltani+, 2017)

    NASA Astrophysics Data System (ADS)

    Paltani, S.; Ricci, C.

    2017-11-01

    We provide here the RefleX executable, for both Linux and MacOSX, together with the User Manual and an example script file and output file. Running, for instance, reflex_linux will produce the file reflex.out. Note that the results may differ slightly depending on the OS, because of slight differences in some implementations of numerical computations. The differences are scientifically meaningless. (5 data files).

  9. Adaptive Multilevel Middleware for Object Systems

    DTIC Science & Technology

    2006-12-01

    …the system at the system-call level or using the CORBA-standard Extensible Transport Framework (ETF). Transparent insertion is highly desirable from an… often as it needs to. This is remedied by using the real-time scheduling class in a stock Linux kernel. We used the sched_setscheduler system call (with… the real-time scheduling class (SCHED_FIFO) for all the ML-NFD programs; later experiments with CPU load indicate that a stock Linux kernel is not…

  10. NAVO MSRC Navigator. Fall 2006

    DTIC Science & Technology

    2006-01-01

    …UNIX Manual Pages: xdm(1x). 7. Buddenhagen, Oswald, "The KDM Handbook," KDE Documentation, http://docs.kde.org/development/en/kdebase/kdm/. 8… Linux Opteron cluster was recently determined through a series of simulations that employed both fixed and adaptive meshes. The fixed-mesh scalability… approximately eight in the total number of cells in the 3-D simulation. The fixed-mesh and AMR scalability results on the Linux Opteron cluster are…

  11. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm^2, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
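
    The batching and cost figures quoted in the abstract (10 million events in 500k-event jobs on a 40-node cluster at $0.63 per hour) lend themselves to a quick back-of-the-envelope check. The helper below only reproduces that arithmetic; the prices are the historic figures reported above, not current Amazon rates.

      # Back-of-the-envelope helper for the batching and cost figures quoted above:
      # 10 million events in 500k-event jobs on a 40-node cluster, billed at the
      # hourly rate reported in the abstract (historic spot pricing, not current).
      import math

      def plan(total_events, events_per_job, nodes, hourly_rate, wall_hours):
          jobs = math.ceil(total_events / events_per_job)
          waves = math.ceil(jobs / nodes)          # rounds of jobs each node runs
          cost = hourly_rate * wall_hours
          return {"jobs": jobs, "waves_per_node": waves, "estimated_cost_usd": round(cost, 2)}

      print(plan(total_events=10_000_000, events_per_job=500_000,
                 nodes=40, hourly_rate=0.63, wall_hours=1))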

  12. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

    The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.

  13. Computing and combustion

    NASA Technical Reports Server (NTRS)

    Thompson, Daniel

    2004-01-01

    Coming into the Combustion Branch of the Turbomachinery and Propulsion Systems Division, there was no set project planned out for me to work on. This was understandable, considering I was only in my sophomore year in college. Also, my mentor was a division chief and it was expected that I would be passed down the line. It took about a week for me to be placed with somebody who could use me. My first project was to write a macro for TecPlot. Commonly, a person would have a 3D contour volume modeling something such as a combustion engine. This 3D volume needed to have slices extracted from it and made into 2D scientific plots with all of the appropriate axes and titles. This was very tedious to do by hand. My macro needed to automate the process. There was some education I needed before I could start, however. First, TecPlot ran on Unix and Linux, like a growing majority of scientific applications. I knew a little about Linux, but I would need to know more to use the software at hand. I took two classes at the Learning Center on Unix and am now comfortable with Linux and Unix. I had already taken Computer Science I and II, and had undergone the transformation from Computer Programmer to Procedural Epistemologist. I knew how to design efficient algorithms; I just needed to learn the macro language. After a little less than a week, I had learned the basics of the language. Like most languages, the best way to learn more of it was by using it. It was decided that it was best that I do the macro in layers, starting simple and adding features as I went. The macro started out slicing with respect to only one axis, and did not make 2D plots out of the slices. Instead, it lined them up inside the solid. Next, I allowed for more than one axis and placed each slice in a separate frame. After this, I added code that transformed each individual slice-frame into a scientific plot. I also made frames for composite volumes, which showed all of the slices in the same XYZ space. I then designed an additional companion macro that exported each frame into its own image file. I then distributed the macros to a test group, and am awaiting feedback. In the meantime, I am researching the possible applications of distributed computing on the National Combustor Code. Many of our Linux boxes were idle for most of the day. The department thinks that it would be wonderful if we could get all of these idle processors to work on a problem under the NCC code. The client software would have to be easily distributed, such as in screensaver format or as a program that only ran when the computer was not in use. This project proves to be an interesting challenge.

  14. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    DTIC Science & Technology

    2010-12-01

    …the software for reevaluation. Once the reevaluation process is completed, CERT provides the client a report detailing the software's conformance… [table excerpt, Flagged Nonconformities (FNC) by software system with TP/FNC ratio: Mozilla Firefox version 2.0, 6/12 (50%); Linux kernel version 2.6.15, 10/126 (8%); Wine…] inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with energy system software will help…

  15. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components. • MS SQL server 2008 backend database. • Microsoft IIS running on Windows server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access

  16. Covert Android Rootkit Detection: Evaluating Linux Kernel Level Rootkits on the Android Operating System

    DTIC Science & Technology

    2012-06-14

    …the attacker. Thus, this race condition causes a privilege escalation. 2.2.5 Summary: This section reviewed software exploitation of a Linux kernel… has led to increased targeting by malware writers. Android attacks have naturally sparked interest in researching protections for Android. This… release, Android 4.0 Ice Cream Sandwich. These rootkits focused on covert techniques to hide the presence of data used by an attacker to infect a…

  17. The Ubuntu Chat Corpus for Multiparticipant Chat Analysis

    DTIC Science & Technology

    2013-03-01

    …the #LINUX corpus (Elsner and Charniak 2010), and the #IPHONE/#PHYSICS/#PYTHON corpus (Adams 2008). For many… made publicly available, making it difficult to comparatively evaluate different techniques. Corpus Description: Ubuntu, a Linux-based operating… Kubuntu (Ubuntu with KDE) support… [table excerpt: per-channel statistics for #ubuntu-devel (developmental team coordination, since 2004-10-01) and #ubuntu+1 (since 2007-04-04)]

  18. MVC for Content Management on the Cloud

    DTIC Science & Technology

    2011-09-01

    Windows, Linux , MacOS, PalmOS and other customized ones (Qiu). Figure 20 illustrates implementation of MVC architecture. Qiu examines a “universal...Listing of Unzipped Text Document (From O’Reilly & Associates, Inc, 2005) Figure 37 shows the results of unzipping this file in Linux . The contents of the...ODF Adoption TC, and the ODF Alliance include members from Adobe, BBC, Bristol City Council, Bull, Corel, EDS, EMC, GNOME, IBM, Intel, KDE , MySQL

  19. [Making a low cost IPSec router on Linux and the assessment for practical use].

    PubMed

    Amiki, M; Horio, M

    2001-09-01

    We installed Linux and FreeS/WAN on a PC/AT compatible machine to make an IPSec router. We measured ping/ftp times both within the university and between the university and the external network. Between the university and the external network (the Internet), there were no differences. We therefore concluded that CPU load is not significant on low-speed networks, because packets exchanged via the Internet are small, or because the VPN's compression offsets the cost of encoding and decoding. On the other hand, within the university, the IPSec router's performance dropped by about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. Recently, VPN appliances have become cheaper, but they do not provide sufficient functionality to build a full VPN environment. Therefore, if one wants a full VPN environment at low cost, we believe a VPN router on Linux is the right choice.

  20. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  1. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayan Ghosh, Jeff Hammond

    OpenSHMEM is a community effort to unify and standardize the SHMEM programming model. MPI (Message Passing Interface) is a well-known community standard for parallel programming using distributed memory. The most recent release of MPI, version 3.0, was designed in part to support programming models like SHMEM. OSHMPI is an implementation of the OpenSHMEM standard using MPI-3 for the Linux operating system. It is the first implementation of SHMEM over MPI one-sided communication and has the potential to be widely adopted due to the portability and wide availability of Linux and MPI-3. OSHMPI has been tested on a variety of systems and implementations of MPI-3, including InfiniBand clusters using MVAPICH2 and SGI shared-memory supercomputers using MPICH. Current support is limited to Linux but may be extended to Apple OSX if there is sufficient interest. The code is open source via https://github.com/jeffhammond/oshmpi
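
    OSHMPI itself is a C library, but the put-into-a-window pattern that underlies SHMEM over MPI-3 one-sided communication can be sketched in Python with mpi4py and NumPy. The sketch below assumes those packages are installed and is only an illustration of the pattern, not OSHMPI's implementation.

      # Sketch (not OSHMPI's C source) of a SHMEM-style "put" on top of MPI-3
      # one-sided communication, using mpi4py. Run with: mpiexec -n 2 python put.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      if comm.Get_size() < 2:
          raise SystemExit("needs at least 2 ranks")
      rank = comm.Get_rank()

      # A "symmetric heap" in SHMEM terms: every rank exposes a same-sized window.
      win = MPI.Win.Allocate(8 * 10, disp_unit=8, comm=comm)
      local = np.frombuffer(win.tomemory(), dtype=np.float64)
      local[:] = 0.0
      comm.Barrier()

      if rank == 0:
          payload = np.arange(10, dtype=np.float64)
          win.Lock(1, MPI.LOCK_SHARED)     # passive-target epoch, like shmem_put
          win.Put(payload, target_rank=1)
          win.Unlock(1)

      comm.Barrier()
      if rank == 1:
          print("rank 1 received:", local)
      win.Free()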

  3. Timing characterization and analysis of the Linux-based, closed loop control computer for the Subaru Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Dinkins, Matthew; Colley, Stephen

    2008-07-01

    Hardware and software specialized for real time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to specialized real time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.

  4. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    A few popular desktop tools have been used in the Earth Science community to validate science data. Because of limitations on the capacity of desktop hardware, such as disk space and CPUs, that software is not able to display large amounts of data from files. This poster describes in-house developed, web-based software built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure that can incorporate new products and new requirements quickly; (3) the user interface through which users manipulate the data and control how the images are displayed.

  5. CFDP for Interplanetary Overlay Network

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol for Interplanetary Overlay Network (CFDP-ION) is an implementation of CFDP that uses ION's DTN (delay tolerant networking) implementation as its UT (unit-data transfer) layer. Because the DTN protocols effect automatic, reliable transmission via multiple relays, CFDP-ION need only satisfy the requirements for Class 1 ("unacknowledged") CFDP. This keeps the implementation small, but without loss of capability. This innovation minimizes processing resources by using zero-copy objects for file data transmission. It runs without modification in VxWorks, Linux, Solaris, and OS/X. As such, this innovation can be used without modification in both flight and ground systems. Integration with DTN enables the CFDP implementation itself to be very simple, and therefore very small. Use of ION infrastructure minimizes consumption of storage and processing resources while maximizing safety.

  6. birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.

    PubMed

    Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir

    2011-05-01

    birgHPC, a bootable Linux Live CD has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slots detection as well as provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.

  7. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Timothy C.

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.
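
    The kernel facility named above, unprivileged creation of a user namespace combined with a mount namespace, can be demonstrated in a few lines. The sketch below calls unshare(2) through ctypes and is only an illustration of that facility; it is not Charliecloud's code, and the uid/gid mapping shown is the minimal single-entry form.

      # Minimal illustration of unprivileged user + mount namespaces via unshare(2).
      # This is not Charliecloud's implementation; it only shows the flags involved.
      import ctypes, os

      CLONE_NEWNS   = 0x00020000   # new mount namespace
      CLONE_NEWUSER = 0x10000000   # new user namespace

      libc = ctypes.CDLL(None, use_errno=True)

      def unshare(flags):
          if libc.unshare(flags) != 0:
              err = ctypes.get_errno()
              raise OSError(err, os.strerror(err))

      if __name__ == "__main__":
          uid, gid = os.getuid(), os.getgid()
          unshare(CLONE_NEWUSER | CLONE_NEWNS)
          # Map the original uid/gid to root inside the namespace so mounts are allowed.
          with open("/proc/self/setgroups", "w") as f:
              f.write("deny")
          with open("/proc/self/uid_map", "w") as f:
              f.write(f"0 {uid} 1")
          with open("/proc/self/gid_map", "w") as f:
              f.write(f"0 {gid} 1")
          print("inside namespaces as uid", os.getuid())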

  8. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.

  9. Developing and Benchmarking Native Linux Applications on Android

    NASA Astrophysics Data System (ADS)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are increasingly popular, and more and more smartphone platforms are emerging. Special attention has been gained by the open source platform Android, presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are currently expected to develop only Java applications.

  10. System Data Model (SDM) Source Code

    DTIC Science & Technology

    2012-08-23

    CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi-
    CC=$(CROSS_COMPILE)gcc
    CXX=$(CROSS_COMPILE)g++
    AR...
    ...and flags to pass to it
    LEX=flex
    LEXFLAGS=-B
    ## The parser generator to invoke and flags to pass to it
    YACC=bison
    YACCFLAGS...
    # Point to default PetaLinux root directory
    ifndef ROOTDIR
    ROOTDIR=$(PETALINUX)/software/petalinux-dist
    endif
    PATH:=$(PATH

  11. The database design of LAMOST based on MYSQL/LINUX

    NASA Astrophysics Data System (ADS)

    Li, Hui-Xian; Sang, Jian; Wang, Sha; Luo, A.-Li

    2006-03-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) will be set up in the coming years. A fully automated software system for reducing and analyzing the spectra has to be developed with the telescope. This database system is an important part of the software system. The requirements for the database of the LAMOST, the design of the LAMOST database system based on MYSQL/LINUX and performance tests of this system are described in this paper.

  12. Navigation/Prop Software Suite

    NASA Technical Reports Server (NTRS)

    Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn

    2012-01-01

    Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite containing configuration items (CIs) resides on IPS/Linux workstations. It features lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. This suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.

  13. Malware Memory Analysis of the IVYL Linux Rootkit: Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2015-04-01

    The purpose of this report is to examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can investigate a publicly available Linux rootkit using the Volatility memory analysis framework... The skills amassed by incident handlers and investigators alike while using Volatility to examine Windows memory images will be of some help...

  14. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  15. Porting and refurbishment of the WSS TNG control software

    NASA Astrophysics Data System (ADS)

    Caproni, Alessandro; Zacchei, Andrea; Vuerli, Claudio; Pucillo, Mauro

    2004-09-01

    The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, sited on La Palma in the Canary Islands, developed at the beginning of the 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communications between the real-time systems (VME), different workstations and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It aimed to refurbish the control software by introducing some of the newer software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes to the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion into the simulated control room of some HPs (to check the mixed environment); and substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility to develop high-level WSS applications in almost every programming language that implements Berkeley sockets. A library for developing Java applications has also been created and tested.

  16. Geena 2, improved automated analysis of MALDI/TOF mass spectra.

    PubMed

    Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo

    2016-03-02

    Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows complete control over all parameters; the Bright Search Interface, which leaves to the user the possibility to tune parameters for alignment of spectra; and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of the serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other on the identification of a predictor of breast cancer mortality following breast cancer surgery, whose results were validated by ELISA, a completely independent method. Geena 2 is a public tool for the automated pre-processing of MS data originated by MALDI/TOF instruments, with a simple and intuitive web interface. It is now under active development for the inclusion of further filtering options and for the adoption of standard formats for MS spectra.
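
    As a rough illustration of two of the pre-processing steps listed above (normalization against an internal standard and averaging of replicate spectra), the following sketch uses NumPy on toy arrays. It does not reproduce Geena 2's heuristic algorithms, and the standard's m/z position and tolerance window are assumptions made only for the example.

    # Toy sketch of two MALDI/TOF pre-processing steps: normalization to an
    # internal standard peak and averaging of replicate spectra (not Geena 2's code).
    import numpy as np

    mz = np.linspace(1000, 12000, 5000)                     # common m/z axis
    replicates = [np.random.rand(5000) for _ in range(3)]   # stand-in intensity arrays

    STANDARD_MZ = 6115.0   # assumed m/z of the spiked internal standard
    TOLERANCE = 5.0        # Da window around the standard peak

    def normalize(spectrum: np.ndarray) -> np.ndarray:
        window = (mz > STANDARD_MZ - TOLERANCE) & (mz < STANDARD_MZ + TOLERANCE)
        standard_intensity = spectrum[window].max()
        return spectrum / standard_intensity                # intensities relative to standard

    # Normalize each replicate, then average them point-wise on the shared axis.
    averaged = np.mean([normalize(s) for s in replicates], axis=0)
    print(averaged.shape)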

  17. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.

  18. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  19. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1 as a fastCGI client and using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use, and depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of the GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
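
    For readers unfamiliar with the protocol the server implements, a WMS 1.1.1 GetMap request is just an HTTP query with a fixed set of parameters. The sketch below builds one with the requests library against a hypothetical endpoint and layer name; only the parameter names come from the WMS 1.1.1 specification, everything else is an assumed example value.

    # Sketch of an OGC WMS 1.1.1 GetMap request (endpoint and layer are hypothetical).
    import requests

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "global_mosaic",          # assumed layer name
        "STYLES": "",
        "SRS": "EPSG:4326",                 # WMS 1.1.1 uses SRS (1.3.0 uses CRS)
        "BBOX": "-180,-90,180,90",          # minx,miny,maxx,maxy in the given SRS
        "WIDTH": "1024",
        "HEIGHT": "512",
        "FORMAT": "image/jpeg",
    }

    resp = requests.get("https://example.org/wms", params=params, timeout=30)
    resp.raise_for_status()
    with open("map.jpg", "wb") as f:
        f.write(resp.content)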

  20. Iris: Constructing and Analyzing Spectral Energy Distributions with the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Laurino, O.; Budynkiewicz, J.; Busko, I.; Cresitello-Dittmar, M.; D'Abrusco, R.; Doe, S.; Evans, J.; Pevunova, O.

    2014-05-01

    We present Iris 2.0, the latest release of the Virtual Astronomical Observatory application for building and analyzing Spectral Energy Distributions (SEDs). With Iris, users may read in and display SEDs, inspect and edit any selection of SED data, fit models to SEDs in arbitrary spectral ranges, and calculate confidence limits on best-fit parameters. SED data may be loaded into the application from VOTable and FITS files compliant with the International Virtual Observatory Alliance interoperable data models, or retrieved directly from NED or the Italian Space Agency Science Data Center; data in non-standard formats may also be converted within the application. Users may seamlessly exchange data between Iris and other Virtual Observatory tools using the Simple Application Messaging Protocol. Iris 2.0 also provides a tool for redshifting, interpolating, and measuring integrated fluxes, and allows simple aperture corrections for individual points and SED segments. Custom Python functions, template models and template libraries may be imported into Iris for fitting SEDs. Iris may be extended through Java plugins; users can install third-party packages, or develop their own plugin using Iris' Software Development Kit. Iris 2.0 is available for Linux and Mac OS X systems.

  1. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    PubMed

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  2. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
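
    A common way to check Zipf's law on such data is to rank the items by size and fit a power law on a log-log scale. The sketch below does this with synthetic heavy-tailed data, since the Linux-distribution dataset analyzed in the paper is not included here; it only illustrates the standard rank-size fit, not the authors' own estimation procedure.

    # Sketch: rank-size ordering and power-law exponent estimate for Zipf-like data.
    # Synthetic sizes stand in for the package-size data analyzed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    sizes = 1.0 / rng.random(10_000)                 # heavy-tailed synthetic sample

    ranked = np.sort(sizes)[::-1]                    # largest first
    ranks = np.arange(1, ranked.size + 1)

    # Zipf's law predicts size ~ rank^(-alpha); fit alpha by least squares in log space.
    slope, intercept = np.polyfit(np.log(ranks), np.log(ranked), 1)
    print(f"estimated Zipf exponent: {-slope:.2f}")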

  3. Development of a Low-Latency, High Data Rate, Differential GPS Relative Positioning System for UAV Formation Flight Control

    DTIC Science & Technology

    2006-09-01

    spiral development cycle involved transporting the software processes from a Windows XP / MATLAB environment to a Linux / C++ environment. This...tested on. Additionally, in the case of the GUMSTIX PC boards, the Linux operating system is burned into the read-only memory. Lastly, both PC-104 and...both the real-time environment and the post-processed environment. When the system operates in real-time mode, an output file is generated which

  4. MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Timossi, Chris

    2006-10-19

    Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our control system .NET applications from MS Windows to Linux.

  5. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analysis on commodity PCs running the Linux operating system.

  6. Development of EPA Protocol Information Enquiry Service System Based on Embedded ARM Linux

    NASA Astrophysics Data System (ADS)

    Peng, Daogang; Zhang, Hao; Weng, Jiannian; Li, Hui; Xia, Fei

    Industrial Ethernet is a new technology for industrial network communications developed in recent years. In the field of industrial automation in China, EPA is the first standard accepted and published by ISO, and has been included as Type 14 in the fourth edition of the IEC 61158 fieldbus standard. According to the EPA standard, field devices such as industrial field controllers, actuators and other instruments are all able to communicate over standard Ethernet. In this paper, an Atmel AT91RM9200 embedded development board and open-source embedded Linux are used to develop an EPA protocol information inquiry service system based on embedded ARM Linux. The system provides an EPA server program for EPA data acquisition; the EPA information inquiry service is available to programs on the local or a remote host through a socket interface. An EPA client can access data and information from other EPA devices on the EPA network once it establishes a connection to the monitoring port of the server.
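
    The inquiry service described above is exposed to local or remote programs through an ordinary socket interface. The sketch below shows the general shape of such an exchange in Python; the address, port number and the line-oriented query verb are illustrative assumptions only, not the EPA standard's actual wire format.

    # Sketch of a socket-based inquiry client such as the one the EPA server exposes.
    # The address, port and query format are illustrative assumptions only.
    import socket

    HOST, PORT = "192.168.1.50", 5000     # hypothetical address of the ARM board

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(b"GET DEVICE_LIST\n")            # assumed query verb
        reply = sock.recv(4096)
    print(reply.decode(errors="replace"))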

  7. fluff: exploratory analysis and visualization of high-throughput sequencing data

    PubMed Central

    Georgiou, Georgios

    2016-01-01

    Summary. In this article we describe fluff, a software package that allows for simple exploration, clustering and visualization of high-throughput sequencing data mapped to a reference genome. The package contains three command-line tools to generate publication-quality figures in an uncomplicated manner using sensible defaults. Genome-wide data can be aggregated, clustered and visualized in a heatmap, according to different clustering methods. This includes a predefined setting to identify dynamic clusters between different conditions or developmental stages. Alternatively, clustered data can be visualized in a bandplot. Finally, fluff includes a tool to generate genomic profiles. As command-line tools, the fluff programs can easily be integrated into standard analysis pipelines. The installation is straightforward and documentation is available at http://fluff.readthedocs.org. Availability. fluff is implemented in Python and runs on Linux. The source code is freely available for download at https://github.com/simonvh/fluff. PMID:27547532

  8. TetrUSS Capabilities for S and C Applications

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Parikh, Paresh

    2004-01-01

    TetrUSS is a suite of loosely coupled computational fluid dynamics software that is packaged into a complete flow analysis system. The system components consist of tools for geometry setup, grid generation, flow solution, visualization, and various utility tasks. Development began in 1990 and it has evolved into a proven and stable system for Euler and Navier-Stokes analysis and design of unconventional configurations. It is 1) well developed and validated, 2) has a broad base of support, and 3) is presently a workhorse code because of the level of confidence that has been established through wide use. The entire system can now run on Linux or Mac architectures. In the following slides, I will highlight more of the features of the VGRID and USM3D codes.

  9. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    PubMed

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project for an electrical stimulator aimed at post-stroke motor dysfunction is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  10. BioMake: a GNU make-compatible utility for declarative workflow management.

    PubMed

    Holmes, Ian H; Mungall, Christopher J

    2017-11-01

    The Unix 'make' program is widely used in bioinformatics pipelines, but suffers from problems that limit its application to large analysis datasets. These include reliance on file modification times to determine whether a target is stale, lack of support for parallel execution on clusters, and restricted flexibility to extend the underlying logic program. We present BioMake, a make-like utility that is compatible with most features of GNU Make and adds support for popular cluster-based job-queue engines, MD5 signatures as an alternative to timestamps, and logic programming extensions in Prolog. BioMake is available for MacOSX and Linux systems from https://github.com/evoldoers/biomake under the BSD3 license. The only dependency is SWI-Prolog (version 7), available from http://www.swi-prolog.org/. Contact: ihholmes+biomake@gmail.com or cmungall+biomake@gmail.com. A feature table comparing BioMake to similar tools is included in the supplementary data, available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
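
    The MD5-signature option mentioned above addresses a familiar weakness of timestamp-based staleness: a target is rebuilt whenever a prerequisite's mtime changes, even if its content did not. The sketch below illustrates the idea generically in Python; it is not BioMake's implementation, and the signature-cache file name is a made-up placeholder.

    # Sketch of content-based (MD5) staleness checking, the idea behind BioMake's
    # alternative to GNU Make's modification-time comparison. Not BioMake's code.
    import hashlib
    import json
    import os

    SIG_FILE = ".signatures.json"   # hypothetical cache of prerequisite checksums

    def md5(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def is_stale(target: str, prerequisites: list[str]) -> bool:
        if not os.path.exists(target):
            return True
        old = json.load(open(SIG_FILE)) if os.path.exists(SIG_FILE) else {}
        # Rebuild only if some prerequisite's *content* changed, not its mtime.
        return any(md5(p) != old.get(p) for p in prerequisites)

    def record(prerequisites: list[str]) -> None:
        json.dump({p: md5(p) for p in prerequisites}, open(SIG_FILE, "w"))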

  11. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.

  12. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    PubMed

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
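
    The query segmentation idea (QS-search) amounts to splitting the input query set into as many pieces as there are workers and running an unmodified search program on each piece in parallel. The sketch below is a generic illustration, not SS-Wrapper's code; it assumes the NCBI BLAST+ `blastp` executable and a pre-formatted database named `nr` are available on the node, and the file names are placeholders.

    # Sketch of query segmentation: split a FASTA query file into N pieces and run
    # an unmodified BLAST on each in parallel. Not SS-Wrapper's implementation.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def split_fasta(path: str, n: int) -> list[str]:
        records = open(path).read().split(">")[1:]            # naive FASTA split
        parts = []
        for i in range(n):
            part = f"query_part{i}.fa"
            with open(part, "w") as out:
                out.write("".join(">" + r for r in records[i::n]))
            parts.append(part)
        return parts

    def run_blast(query: str) -> str:
        out = query + ".blast"
        subprocess.run(["blastp", "-query", query, "-db", "nr", "-out", out],
                       check=True)
        return out

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(run_blast, split_fasta("queries.fa", 4)))
        print(results)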

  13. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    PubMed Central

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-01-01

    Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Conclusions Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist. PMID:15511296

  14. Quantifying medical student clinical experiences via an ICD Code Logging App.

    PubMed

    Rawlins, Fred; Sumpter, Cameron; Sutphin, Dean; Garner, Harold R

    2018-03-01

    The logging of ICD Diagnostic, Procedure and Drug codes is one means of tracking the experience of medical students' clinical rotations. The goal is to create a web-based computer and mobile application to track the progress of trainees, monitor the effectiveness of their training locations and be a means of sampling public health status. We have developed a web-based app in which medical trainees make entries via a simple and quick interface optimized for both mobile devices and personal computers. For each patient interaction, users enter ICD diagnostic, procedure, and drug codes via a hierarchical or search entry interface, as well as patient demographics (age range and gender, but no personal identifiers), and free-text notes. Users and administrators can review and edit input via a series of output interfaces. The user interface and back-end database are provided via dual redundant failover Linux servers. Students master the interface in ten minutes, and thereafter complete entries in less than one minute. Five hundred forty third-year VCOM students each averaged 100 entries in the first four-week clinical rotation. Data accumulated in various Appalachian clinics and Central American medical mission trips has demonstrated the public health surveillance utility of the application. PC and mobile apps can be used to collect medical trainee experience in real time or near real-time, quickly, and efficiently. This system has collected 75,596 entries to date, less than 2% of trainees have needed assistance to become proficient, and medical school administrators are using the various summaries to evaluate students and compare different rotation sites. Copyright © 2017. Published by Elsevier B.V.

  15. Wavelet extractor: A Bayesian well-tie and wavelet extraction program

    NASA Astrophysics Data System (ADS)

    Gunning, James; Glinsky, Michael E.

    2006-06-01

    We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.

  16. DROIDS 1.20: A GUI-Based Pipeline for GPU-Accelerated Comparative Protein Dynamics.

    PubMed

    Babbitt, Gregory A; Mortensen, Jamie S; Coppola, Erin E; Adams, Lily E; Liao, Justin K

    2018-03-13

    Traditional informatics in comparative genomics work only with static representations of biomolecules (i.e., sequence and structure), thereby ignoring the molecular dynamics (MD) of proteins that define function in the cell. A comparative approach applied to MD would connect this very short timescale process, defined in femtoseconds, to one of the longest in the universe: molecular evolution measured in millions of years. Here, we leverage advances in graphics-processing-unit-accelerated MD simulation software to develop a comparative method of MD analysis and visualization that can be applied to any two homologous Protein Data Bank structures. Our open-source pipeline, DROIDS (Detecting Relative Outlier Impacts in Dynamic Simulations), works in conjunction with existing molecular modeling software to convert any Linux gaming personal computer into a "comparative computational microscope" for observing the biophysical effects of mutations and other chemical changes in proteins. DROIDS implements structural alignment and Benjamini-Hochberg-corrected Kolmogorov-Smirnov statistics to compare nanosecond-scale atom bond fluctuations on the protein backbone, color mapping the significant differences identified in protein MD with single-amino-acid resolution. DROIDS is simple to use, incorporating graphical user interface control for Amber16 MD simulations, cpptraj analysis, and the final statistical and visual representations in R graphics and UCSF Chimera. We demonstrate that DROIDS can be utilized to visually investigate molecular evolution and disease-related functional changes in MD due to genetic mutation and epigenetic modification. DROIDS can also be used to potentially investigate binding interactions of pharmaceuticals, toxins, or other biomolecules in a functional evolutionary context as well. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
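
    The statistical core described above, comparing per-residue fluctuation distributions between two simulations with a Kolmogorov-Smirnov test followed by a Benjamini-Hochberg correction, can be sketched with SciPy and statsmodels on placeholder arrays. This is a schematic of the test procedure only, not the DROIDS pipeline itself; the array shapes and the injected shift are invented for the example.

    # Sketch of the per-residue statistics a DROIDS-style comparison applies:
    # two-sample KS test per residue, then Benjamini-Hochberg FDR correction.
    # Random arrays stand in for backbone-atom fluctuation samples from two MD runs.
    import numpy as np
    from scipy.stats import ks_2samp
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(1)
    n_residues, n_frames = 120, 500
    run_a = rng.normal(1.0, 0.2, size=(n_residues, n_frames))   # e.g. wild type
    run_b = rng.normal(1.0, 0.2, size=(n_residues, n_frames))   # e.g. mutant
    run_b[40:55] += 0.15   # pretend a mutation shifts fluctuations in one region

    pvals = np.array([ks_2samp(run_a[i], run_b[i]).pvalue for i in range(n_residues)])
    significant, corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

    print("residues with significantly different dynamics:", np.where(significant)[0])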

  17. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  18. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    DOE PAGES

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.

  19. The LINC-NIRVANA fringe and flexure tracker: Linux real-time solutions

    NASA Astrophysics Data System (ADS)

    Wang, Yeping; Bertram, Thomas; Straubmeier, Christian; Rost, Steffen; Eckart, Andreas

    2006-06-01

    The correction of atmospheric differential piston and instrumental flexure effects is mandatory for optimum interferometric performance of the LBT NIR interferometric imaging camera LINC-NIRVANA. The task of the Fringe and Flexure Tracking System (FFTS) is to detect and correct these effects in a real-time closed loop. On a timescale of milliseconds, image data of the order of 4K bytes has to be retrieved from the FFTS detector, analyzed, and the results have to be sent to the control system. The need for a reliable communication between several processes within a confined period of time calls for solutions with good real-time performance. We investigated two soft real-time options for the Linux platform. The design we present takes advantage of several features that follow the POSIX standard with improved real-time performance, which were implemented in the new Linux kernel (2.6.12). Several concepts, such as synchronization, shared memory, and preemptive scheduling are considered and the performance of the most time-critical parts of the FFTS software is tested.

  20. Testing Task Schedulers on Linux System

    NASA Astrophysics Data System (ADS)

    Jelenković, Leonardo; Groš, Stjepan; Jakobović, Domagoj

    Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first one is to identify which properties of the scheduler to test. The second problem is how to perform the tests, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a test framework for testing task schedulers and presents one potential solution. The observed behavior is that of the scheduler used for “normal” task scheduling (SCHED_OTHER), as opposed to the policies used for real-time tasks (SCHED_FIFO, SCHED_RR).
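
    One simple, portable probe of the behaviour discussed above is to measure the wake-up jitter of a periodic task under the default policy and, where permitted, under a real-time policy; Python's os module exposes the relevant POSIX calls. The sketch below is an illustrative measurement loop, not the paper's test framework; switching to SCHED_FIFO requires root or CAP_SYS_NICE, so the fallback path is the one most users will see.

    # Sketch: measure wake-up jitter of a 1 ms periodic loop under the current
    # scheduling policy; optionally request SCHED_FIFO first (needs privileges).
    import os
    import time

    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
        print("running under SCHED_FIFO")
    except PermissionError:
        print("no privileges for SCHED_FIFO, staying on SCHED_OTHER")

    period = 0.001                      # 1 ms target period
    lateness = []
    next_wake = time.monotonic() + period
    for _ in range(1000):
        time.sleep(max(0.0, next_wake - time.monotonic()))
        lateness.append(time.monotonic() - next_wake)   # how late we woke up
        next_wake += period

    print(f"max wake-up lateness: {max(lateness) * 1e6:.0f} µs")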

  1. Comparative Analysis of Active and Passive Mapping Techniques in an Internet-Based Local Area Network

    DTIC Science & Technology

    2004-03-01

    PIII/500 (K) 512 A11 3C905 Honeynet; PIII/1000 (C) 512 A11 3C905 Generator; PIII/800 (C) 256 A11 3C905. Each system is running Debian GNU / Linux “unstable... Network,” September 2000. http://www.issues.af.mil/notams/notam00-5.html; accessed January 16, 2004. 5. “Debian GNU / Linux 3.0 Released,” Debian News...interact with those servers. 1.5 Summary The remainder of this document is organized into four chapters. Chapter 2 contains the literature review where

  2. Dataset for forensic analysis of B-tree file system.

    PubMed

    Wani, Mohamad Ahtisham; Bhat, Wasim Ahmad

    2018-06-01

    Since the B-tree file system (Btrfs) is set to become the de facto standard file system on Linux (and Linux-based) operating systems, a Btrfs dataset for forensic analysis is of great interest and immense value to the forensic community. This article presents a novel dataset for forensic analysis of Btrfs that was collected using a proposed data-recovery procedure. The dataset identifies various generalized and common file system layouts and operations, specific node-balancing mechanisms triggered, logical addresses of various data structures, on-disk records, recovered data as directory entries and extent data from leaf and internal nodes, and the percentage of data recovered.

  3. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using Python scripts that access the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
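
    The control flow described above (launch an instance from a stored AMI, run the job, push the output to S3, terminate) maps onto a few calls in the AWS API. The sketch below uses the modern boto3 library rather than the original boto scripts, and the AMI ID, instance type, bucket and key names are all placeholders, not values from the paper.

    # Sketch of the EC2/S3 control flow described above, using boto3 (the paper
    # used the older boto library). AMI ID, instance type and bucket are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # 1. Launch a worker from the prepared machine image.
    resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",   # hypothetical AMI
                             InstanceType="m5.large",
                             MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]

    # 2. ... the job runs on the instance and writes its output locally ...

    # 3. Export job output to S3, then terminate the instance.
    s3.upload_file("job_output.tar.gz", "my-atlas-output", "run42/job_output.tar.gz")
    ec2.terminate_instances(InstanceIds=[instance_id])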

  4. Eternal Sunshine of the Spotless Machine: Protecting Privacy with Ephemeral Channels

    PubMed Central

    Dunn, Alan M.; Lee, Michael Z.; Jana, Suman; Kim, Sangman; Silberstein, Mark; Xu, Yuanzhong; Shmatikov, Vitaly; Witchel, Emmett

    2014-01-01

    Modern systems keep long memories. As we show in this paper, an adversary who gains access to a Linux system, even one that implements secure deallocation, can recover the contents of applications’ windows, audio buffers, and data remaining in device drivers—long after the applications have terminated. We design and implement Lacuna, a system that allows users to run programs in “private sessions.” After the session is over, all memories of its execution are erased. The key abstraction in Lacuna is an ephemeral channel, which allows the protected program to talk to peripheral devices while making it possible to delete the memories of this communication from the host. Lacuna can run unmodified applications that use graphics, sound, USB input devices, and the network, with only 20 percentage points of additional CPU utilization. PMID:24755709

  5. Simplified Deployment of Health Informatics Applications by Providing Docker Images.

    PubMed

    Löbe, Matthias; Ganslandt, Thomas; Lotzmann, Lydia; Mate, Sebastian; Christoph, Jan; Baum, Benjamin; Sariyar, Murat; Wu, Jie; Stäubert, Sebastian

    2016-01-01

    Due to the specific needs of biomedical researchers, in-house development of software is widespread. A common problem is to maintain and enhance software after the funded project has ended. Even if many tools are made open source, only a couple of projects manage to attract a user base large enough to ensure sustainability. Reasons for this include complex installation and configuration of biomedical software as well as an ambiguous terminology for the features provided, all of which make evaluation of software laborious. Docker is a para-virtualization technology based on Linux containers that eases deployment of applications and facilitates evaluation. We investigated a suite of software developments funded by a large umbrella organization for networked medical research within the last 10 years and created Docker containers for a number of applications to support utilization and dissemination.
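
    The deployment gain comes from the fact that an evaluator needs only a container runtime: pulling and starting a packaged application is a single call. The sketch below uses the Docker SDK for Python; the image name and port mapping are hypothetical placeholders, not the images actually produced in the project.

    # Sketch: pulling and running a packaged application with the Docker SDK for
    # Python. The image name and port mapping are hypothetical placeholders.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "example/health-informatics-app:latest",   # hypothetical published image
        detach=True,
        ports={"8080/tcp": 8080},                  # expose the app's web interface
    )
    print(container.status, container.short_id)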

  6. Additional deployment of ocean bottom gravity meter and ocean bottom electro-magnetic meter for the multidisciplinary cabled observation in Sagami Bay, Japan

    NASA Astrophysics Data System (ADS)

    Mitsuzawa, K.; Goto, T.; Araki, E.; Watanabe, T.; Sugioka, H.; Kasaya, T.; Sayanagi, K.; Mikada, H.; Fujimoto, H.; Nagao, T.; Koizumi, K.; Asakawa, K.

    2005-12-01

    The western part of Sagami Bay, on the central Pacific side of Japan, is known as one of the most tectonically active areas. In this area, Teishi Knoll, a volcanic seamount, erupted in 1989, and earthquake swarms occur repeatedly every few years along the eastern coast of the Izu Peninsula. A real-time deep sea floor observatory was deployed about 7 km off Hatsushima Island, Sagami Bay, at a depth of 1174 m in 1993 to monitor seismic activity, underwater pressure, water temperature and deep currents. A video camera and lights were also mounted in the observatory to monitor biological activity associated with the tectonic activity. The observation system, including a submarine electro-optical cable with a length of 8 km, was completely renewed in 2000. Several underwater-mateable connectors are installed in the new observatory for additional observation instruments. A precise pressure sensor, an ocean bottom gravity meter and an ocean bottom electro-magnetic meter were installed using the ROV Hyper-Dolphin during the cruise of R/V Natsushima from January 9 to 14, 2005. We started operating them on February 10, 2005, after checking their data quality. We also installed an underwater internet interface, called the Linux Box, as a prototype of an underwater network system operated by the Linux operating system. The Linux Box is a key network component for the multidisciplinary observation network. It will be able to connect many kinds of observation instruments over an internet connection. We connected the precise pressure sensor to the Linux Box as its sensor in this experiment.

  7. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summaryProgram title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.

  8. LHCb Dockerized Build Environment

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Belin, M.; Closier, J.; Couturier, B.

    2017-10-01

    Used as lightweight virtual machines or as enhanced chroot environments, Linux containers, and in particular the Docker abstraction over them, are more and more popular in the virtualization communities. The LHCb Core Software team decided to investigate how to use Docker containers to provide stable and reliable build environments for the different supported platforms, including the obsolete ones which cannot be installed on modern hardware, to be used in integration builds, releases and by any developer. We present here the techniques and procedures set up to define and maintain the Docker images and how these images can be used to develop on modern Linux distributions for platforms otherwise not accessible.

  9. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    PubMed

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl

  10. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

    To meet the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of this testing work, the goal of this paper is to transmit image data from a CMOS camera through SpaceWire and display real-time images on a graphical user interface built with Qt on an embedded Linux & ARM development platform. A point-to-point mode of transmission was chosen; the results showed that the two communicating ends displayed essentially consistent images in succession. This suggests that SpaceWire can transmit the data reliably.

  11. PsyToolkit: a software package for programming psychological experiments using Linux.

    PubMed

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  12. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
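
    To illustrate the striping idea mentioned above (a generic sketch, not code from the KNBD prototype; the node list and block size are arbitrary assumptions), a file system layered on several remote block devices could map logical blocks to nodes round-robin:

      # Minimal round-robin striping sketch: map a logical block number to
      # (node, byte offset) across several remote block devices.
      NODES = ["node0", "node1", "node2", "node3"]   # hypothetical block servers
      BLOCK_SIZE = 4096                              # bytes per block (assumed)

      def locate(logical_block):
          """Return which node holds a logical block and its byte offset there."""
          node = NODES[logical_block % len(NODES)]
          physical_block = logical_block // len(NODES)
          return node, physical_block * BLOCK_SIZE

      # Print the placement of the first few logical blocks.
      for lb in range(8):
          print(lb, locate(lb))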

  13. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    ERIC Educational Resources Information Center

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
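
    For reference (this formula is standard background, not drawn from the article), the expected utility of a gamble with n possible outcomes x_1, ..., x_n occurring with probabilities p_1, ..., p_n is the probability-weighted sum of the outcome utilities, in LaTeX notation:

      E[U] = \sum_{i=1}^{n} p_i \, U(x_i), \qquad \sum_{i=1}^{n} p_i = 1 .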

  14. Community-driven computational biology with Debian Linux.

    PubMed

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.

  15. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  16. AirShow 1.0 CFD Software Users' Guide

    NASA Technical Reports Server (NTRS)

    Mohler, Stanley R., Jr.

    2005-01-01

    AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.

  17. Development of an e-VLBI Data Transport Software Suite with VDIF

    NASA Technical Reports Server (NTRS)

    Sekido, Mamoru; Takefuji, Kazuhiro; Kimura, Moritaka; Hobiger, Thomas; Kokado, Kensuke; Nozawa, Kentarou; Kurihara, Shinobu; Shinno, Takuya; Takahashi, Fujinobu

    2010-01-01

    We have developed a software library (KVTP-lib) for VLBI data transmission over the network with the VDIF (VLBI Data Interchange Format), which is the newly proposed standard VLBI data format designed for electronic data transfer over the network. The software package keeps the application layer (VDIF frame) and the transmission layer separate, so that each layer can be developed efficiently. The real-time VLBI data transmission tool sudp-send is an application tool based on the KVTP-lib library. sudp-send captures the VLBI data stream from the VSI-H interface with the K5/VSI PC-board and writes the data to file in standard Linux file format or transmits it to the network using the simple-UDP (SUDP) protocol. Another tool, sudp-recv, receives the data stream from the network and writes the data to file in a specific VLBI format (K5/VSSP, VDIF, or Mark 5B). This software system has been implemented on the Wettzell-Tsukuba baseline; evaluation before operational employment is under way.
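
    The separation between framing and transport described above can be illustrated with a generic UDP sketch (this is not the KVTP-lib or sudp-send implementation; the host, port and frame size are arbitrary assumptions):

      import socket

      HOST, PORT = "127.0.0.1", 50000   # hypothetical receiver address
      FRAME_SIZE = 8192                 # bytes per data frame (assumed)

      def send_frames(path):
          """Read a raw data file frame by frame and push each frame over UDP."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          with open(path, "rb") as f:
              while True:
                  frame = f.read(FRAME_SIZE)
                  if not frame:
                      break
                  sock.sendto(frame, (HOST, PORT))
          sock.close()

      def receive_frames(out_path, n_frames):
          """Receive n_frames UDP datagrams and append them to a file."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind((HOST, PORT))
          with open(out_path, "wb") as f:
              for _ in range(n_frames):
                  frame, _addr = sock.recvfrom(FRAME_SIZE)
                  f.write(frame)
          sock.close()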

  18. GUI implementation of image encryption and decryption using Open CV-Python script on secured TFTP protocol

    NASA Astrophysics Data System (ADS)

    Reddy, K. Rasool; Rao, Ch. Madhava

    2018-04-01

    Security is currently one of the primary concerns in the transmission of images, due to their increasing use in industrial applications, so it is necessary to protect image data from unauthorized individuals. Various strategies have been investigated to secure such data, of which encryption is one of the most prominent. This paper applies the Rijndael (AES) algorithm to protect the data from unauthorized users. An Exponential Key Exchange (EKE) concept is also introduced to exchange the key between client and server. Data are exchanged over the network between client and server through a simple protocol known as the Trivial File Transfer Protocol (TFTP). This protocol is used mainly in embedded servers to transfer data, and it can also protect the data if protection capabilities are integrated. In this paper, a GUI environment for image encryption and decryption is implemented. All experiments were carried out in a Linux environment using an OpenCV-Python script.
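
    As a minimal sketch of the kind of encryption step described (not the authors' implementation; it assumes the pycryptodome and OpenCV-Python packages, a hard-coded demonstration key, and AES in CBC mode):

      import cv2
      import numpy as np
      from Crypto.Cipher import AES
      from Crypto.Util.Padding import pad, unpad

      KEY = b"0123456789abcdef"   # 16-byte demo key (never hard-code keys in practice)
      IV = b"fedcba9876543210"    # 16-byte demo initialization vector

      def encrypt_image(path):
          """Read an image with OpenCV and AES-CBC encrypt its raw pixel bytes."""
          img = cv2.imread(path, cv2.IMREAD_COLOR)
          cipher = AES.new(KEY, AES.MODE_CBC, IV)
          ciphertext = cipher.encrypt(pad(img.tobytes(), AES.block_size))
          return ciphertext, img.shape

      def decrypt_image(ciphertext, shape):
          """Decrypt the ciphertext and restore the original image array."""
          cipher = AES.new(KEY, AES.MODE_CBC, IV)
          raw = unpad(cipher.decrypt(ciphertext), AES.block_size)
          return np.frombuffer(raw, dtype=np.uint8).reshape(shape)

      ct, shape = encrypt_image("input.png")
      cv2.imwrite("restored.png", decrypt_image(ct, shape))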

  19. BpWrapper: BioPerl-based sequence and tree utilities for rapid prototyping of bioinformatics pipelines.

    PubMed

    Hernández, Yözen; Bernstein, Rocky; Pagan, Pedro; Vargas, Levy; McCaig, William; Ramrattan, Girish; Akther, Saymon; Larracuente, Amanda; Di, Lia; Vieira, Filipe G; Qiu, Wei-Gang

    2018-03-02

    Automated bioinformatics workflows are more robust, easier to maintain, and their results more reproducible when built with command-line utilities than with custom-coded scripts. Command-line utilities have the further benefit of relieving bioinformatics developers of the need to learn, or to interact directly with, biological software libraries. There is, however, a lack of command-line utilities that leverage popular Open Source biological software toolkits such as BioPerl ( http://bioperl.org ) to make many of the well-designed, robust, and routinely used biological classes available to a wider base of end users. Designed as standard utilities for UNIX-family operating systems, BpWrapper makes the functionality of some of the most popular BioPerl modules readily accessible on the command line to novice as well as to experienced bioinformatics practitioners. The initial release of BpWrapper includes four utilities with concise command-line user interfaces, bioseq, bioaln, biotree, and biopop, specialized for manipulation of molecular sequences, sequence alignments, phylogenetic trees, and DNA polymorphisms, respectively. Over a hundred methods are currently available as command-line options and new methods are easily incorporated. Performance of the BpWrapper utilities lags that of precompiled utilities but is equivalent to that of other utilities based on BioPerl. BpWrapper has been tested on BioPerl Release 1.6, Perl versions 5.10.1 to 5.25.10, and operating systems including Apple macOS, Microsoft Windows, and GNU/Linux. Release code is available from the Comprehensive Perl Archive Network (CPAN) at https://metacpan.org/pod/Bio::BPWrapper . Source code is available on GitHub at https://github.com/bioperl/p5-bpwrapper . BpWrapper improves on existing sequence utilities by following the design principles of Unix text utilities, including a concise user interface, extensive command-line options, and standard input/output for serialized operations. Further, dozens of novel methods for manipulation of sequences, alignments, and phylogenetic trees, unavailable in existing utilities (e.g., EMBOSS, Newick Utilities, and FAST), are provided. Bioinformaticians should find BpWrapper useful for rapid prototyping of workflows on the command line without creating custom scripts for comparative genomics and other bioinformatics applications.

  20. Design and implementation of self-balancing coaxial two wheel robot based on HSIC

    NASA Astrophysics Data System (ADS)

    Hu, Tianlian; Zhang, Hua; Dai, Xin; Xia, Xianfeng; Liu, Ran; Qiu, Bo

    2007-12-01

    This paper studies the position and orientation control of a self-balancing coaxial two-wheel robot based on human simulated intelligent control (HSIC) theory. Adopting the Lagrange equation, the dynamic model of the self-balancing coaxial two-wheel robot is built, and the Sensory-motor Intelligent Schemas (SMIS) of the HSIC controller for the robot are designed by analyzing its movement and simulating a human controller. During the robot's motion, by perceiving the robot's position and orientation and applying a multi-mode control strategy based on characteristic identification, the HSIC controller enables the robot to maintain its posture. A simulation platform is established with Matlab/Simulink, and a motion controller is designed and realized on the RT-Linux real-time operating system, employing a high-speed ARM9 processor (S3C2440) as the core of the motion controller. The effectiveness of the new design is verified by experiment.
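
    A generic illustration of a multi-mode, characteristic-identification control strategy of the kind described (not the controller actually designed in the paper; thresholds and gains below are arbitrary assumptions) might switch between a coarse corrective mode for large errors and a fine proportional-derivative mode near the balance point:

      def multi_mode_control(error, error_rate,
                             threshold=0.2, coarse_gain=5.0, kp=12.0, kd=1.5):
          """Toy multi-mode controller: identify the current characteristic
          state from the error, then pick the control mode accordingly."""
          if abs(error) > threshold:
              # Large deviation: apply a strong corrective (bang-bang-like) action.
              return -coarse_gain if error > 0 else coarse_gain
          # Near the balance point: smooth proportional-derivative regulation.
          return -(kp * error + kd * error_rate)

      # Example: tilt-angle error of 0.05 rad, angular rate of 0.1 rad/s.
      print(multi_mode_control(0.05, 0.1))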

  1. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  2. Millisecond timing on PCs and Macs.

    PubMed

    MacInnes, W J; Taylor, T L

    2001-05-01

    A real-time, object-oriented solution for displaying stimuli on Windows 95/98, MacOS and Linux platforms is presented. The program, written in C++, utilizes a special-purpose window class (GLWindow), OpenGL, and 32-bit graphics acceleration; it avoids display timing uncertainty by substituting the new window class for the default window code for each system. We report the outcome of tests for real-time capability across PC and Mac platforms running a variety of operating systems. The test program, which can be used as a shell for programming real-time experiments and testing specific processors, is available at http://www.cs.dal.ca/~macinnwj. We aim to give researchers a sense of the usefulness of our program, to highlight the ability of many multitasking environments to achieve real time, and to caution users about systems that may not achieve real time, even under optimal conditions.

  3. Mapping RNA-seq Reads with STAR

    PubMed Central

    Dobin, Alexander; Gingeras, Thomas R.

    2015-01-01

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is Open Source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920

  4. Mapping RNA-seq Reads with STAR.

    PubMed

    Dobin, Alexander; Gingeras, Thomas R

    2015-09-03

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, and signal visualization. In this unit, we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux, or Mac OS X systems. Copyright © 2015 John Wiley & Sons, Inc.
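
    A typical alignment run of the kind described could be driven from Python as in the sketch below (not taken from the unit itself; file names, thread count and option values are placeholders, and the flag names follow the STAR manual as we recall them, so they should be checked against the installed version):

      import subprocess

      def run_star(genome_dir, reads_1, reads_2, out_prefix, threads=8):
          """Launch a basic paired-end STAR alignment producing a sorted BAM."""
          cmd = [
              "STAR",
              "--runThreadN", str(threads),
              "--genomeDir", genome_dir,
              "--readFilesIn", reads_1, reads_2,
              "--readFilesCommand", "zcat",            # reads are gzip-compressed
              "--outSAMtype", "BAM", "SortedByCoordinate",
              "--outFileNamePrefix", out_prefix,
          ]
          subprocess.run(cmd, check=True)

      run_star("genome_index/", "sample_R1.fastq.gz", "sample_R2.fastq.gz", "sample_")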

  5. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    NASA Technical Reports Server (NTRS)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  6. VizieR Online Data Catalog: FADO code (Gomes+, 2017)

    NASA Astrophysics Data System (ADS)

    Gomes, J. M.; Papaderos, P.

    2017-03-01

    FADO comes from the Latin word "fatum" that means fate or destiny. It is also a well known genre of Portuguese music, and by choosing this acronym for this spectral synthesis tool we would like to pay tribute to Portugal. The main goal of FADO is to explore the star-formation and chemical enrichment history (the "Fado") of galaxies based on two hitherto unique elements in spectral fitting models: a) self-consistency between the best-fitting star formation history (SFH) and the nebular characteristics of a galaxy (e.g., hydrogen Balmer-line luminosities and equivalent widths; shape of the nebular continuum, including the Balmer and Paschen discontinuity) and b) genetic optimization and artificial intelligence algorithms. This document is part of the FADO v.1 distribution package, which contains two different ascii files, ReadMe and Read_F, and one tarball archive FADOv1.tar.gz. FADOv1.tar.gz contains the binary (executable) compiled in both OpenSuSE 13.2 64bit LINUX (FADO) and MAC OS X (FADO_MACOSX). The former is compatible with most LINUX distributions, while the latter was only tested for Yosemite 10.10.3. It contains the configuration files for running FADO: FADO.config and PLOT.config, as well as the "Simple Stellar Population" (SSP) base library with the base file list Base.BC03.L, the FADO v.1 short manual Read_F and this file (in the ReadMe directory) and, for testing purposes, three characteristic de-redshifted spectra from SDSS-DR7 in ascii format, corresponding to a star-forming (spec1.txt), composite (spec2.txt) and LINER (spec3.txt) galaxy. Auxiliary files needed for execution of FADO (.HIfboundem.ascii, .HeIIfbound.ascii, .HeIfboundem.ascii, grfont.dat and grfont.txt) are also included in the tarball. By decompressing the tarball the following six directories are created: input, output, plots, ReadMe, SSPs and tables (see below for a brief explanation). (2 data files).

  7. Impact on TRMM Products of Conversion to Linux

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

    In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment from an SGI 32 bit IRIX processing environment to a Linux (Beowulf) 64 bit processing environment. This change of platform and operating system addressing (32 to 64) has some influence on data values in the TRMM data products. This paper will describe the transition architecture and scheduling. It will also provide an analysis of what the nature of the product differences will be. It will demonstrate that the differences are not scientifically significant and are generally not visible. However, they are not always identical with those which the SGI would produce.

  8. NanoPack: visualizing and processing long read sequencing data.

    PubMed

    De Coster, Wouter; D'Hert, Svenn; Schultz, Darrin T; Cruts, Marc; Van Broeckhoven, Christine

    2018-03-14

    Here we describe NanoPack, a set of tools developed for visualization and processing of long read sequencing data from Oxford Nanopore Technologies and Pacific Biosciences. The NanoPack tools are written in Python3 and released under the GNU GPL3.0 License. The source code can be found at https://github.com/wdecoster/nanopack, together with links to separate scripts and their documentation. The scripts are compatible with Linux, Mac OS and the MS Windows 10 subsystem for Linux and are available as a graphical user interface, a web service at http://nanoplot.bioinf.be and command line tools. wouter.decoster@molgen.vib-ua.be. Supplementary tables and figures are available at Bioinformatics online.

  9. Real-time head movement system and embedded Linux implementation for the control of power wheelchairs.

    PubMed

    Nguyen, H T; King, L M; Knight, G

    2004-01-01

    Mobility has become very important for our quality of life. A loss of mobility due to an injury is usually accompanied by a loss of self-confidence. For many individuals, independent mobility is an important aspect of self-esteem. Head movement is a natural form of pointing and can be used to directly replace the joystick whilst still allowing for similar control. Through the use of embedded LINUX and artificial intelligence, a hands-free head movement wheelchair controller has been designed and implemented successfully. This system provides for severely disabled users an effective power wheelchair control method with improved posture, ease of use and attractiveness.

  10. Real-time Experiment Interface for Biological Control Applications

    PubMed Central

    Lin, Risa J.; Bettencourt, Jonathan; White, John A.; Christini, David J.; Butera, Robert J.

    2013-01-01

    The Real-time Experiment Interface (RTXI) is a fast and versatile real-time biological experimentation system based on Real-Time Linux. RTXI is open source and free, can be used with an extensive range of experimentation hardware, and can be run on Linux or Windows computers (when using the Live CD). RTXI is currently used extensively for two experiment types: dynamic patch clamp and closed-loop stimulation pattern control in neural and cardiac single cell electrophysiology. RTXI includes standard plug-ins for implementing commonly used electrophysiology protocols with synchronized stimulation, event detection, and online analysis. These and other user-contributed plug-ins can be found on the website (http://www.rtxi.org). PMID:21096883

  11. Software for Processing of Digitized Astronegatives from Archives and Databases of Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu. I.; Andruk, V. N.; Kazantseva, L. V.

    The paper discusses and illustrates the steps of basic processing of digitized images of astronegatives. Software for obtaining rectangular coordinates and photometric values of objects on photographic plates was created in the LINUX / MIDAS / ROMAFOT environment. The program can automatically process a specified number of files in FITS format with sizes up to 20000 x 20000 pixels. Other programs were written in FORTRAN and PASCAL with the ability to work under LINUX or WINDOWS. They were used for: identification of stars, separation and exclusion of diffraction satellites and double and triple exposures, elimination of image defects, and reduction to the equatorial coordinates and magnitudes of reference catalogs.

  12. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
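
    The embarrassingly parallel part of such a pipeline, splitting a large ligand library into per-job chunks before docking, can be sketched as follows (a generic illustration, not DOVIS code; the SDF-style record delimiter and chunk size are assumptions):

      def split_ligand_library(path, ligands_per_chunk=1000, delimiter="$$$$"):
          """Split a multi-molecule file (SDF-style, records ending with a
          delimiter line) into chunks suitable for independent docking jobs."""
          chunks, current, record = [], [], []
          with open(path) as f:
              for line in f:
                  record.append(line)
                  if line.strip() == delimiter:
                      current.append("".join(record))
                      record = []
                      if len(current) == ligands_per_chunk:
                          chunks.append(current)
                          current = []
          if current:
              chunks.append(current)
          return chunks

      # Write each chunk to its own file, one per cluster job.
      for i, chunk in enumerate(split_ligand_library("library.sdf")):
          with open(f"chunk_{i:04d}.sdf", "w") as out:
              out.writelines(chunk)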

  13. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high-availability without single points of failure and without single points of control.

  14. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed with the JPEG image compression standard, and the monitoring picture is transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the design of the system, then briefly introduces the realization of the hardware and software, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
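
    The capture-compress-transmit loop described above can be sketched generically as follows (not the authors' ARM11 code; it assumes OpenCV-Python is available and uses an arbitrary host, port and JPEG quality):

      import socket
      import struct
      import cv2

      MONITOR_HOST, MONITOR_PORT = "192.168.1.100", 9000   # hypothetical monitoring center
      JPEG_QUALITY = 80

      def stream_camera(device=0):
          """Capture frames from a USB camera, JPEG-compress them and send
          each frame over TCP prefixed with its length."""
          cap = cv2.VideoCapture(device)
          sock = socket.create_connection((MONITOR_HOST, MONITOR_PORT))
          try:
              while True:
                  ok, frame = cap.read()
                  if not ok:
                      break
                  ok, jpeg = cv2.imencode(".jpg", frame,
                                          [cv2.IMWRITE_JPEG_QUALITY, JPEG_QUALITY])
                  if not ok:
                      continue
                  data = jpeg.tobytes()
                  sock.sendall(struct.pack("!I", len(data)) + data)
          finally:
              cap.release()
              sock.close()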

  15. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify if the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job types, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, and controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we show several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
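
    As a small illustration of driving such a configuration from a script (a generic sketch, not part of the testing described; the partition and QOS names are placeholders, while the sbatch options shown are standard SLURM ones):

      import subprocess

      def submit(script_path, partition="cms", qos="normal", ntasks=1, walltime="01:00:00"):
          """Submit a batch script to SLURM and return the raw sbatch output."""
          cmd = [
              "sbatch",
              "--partition", partition,
              "--qos", qos,
              "--ntasks", str(ntasks),
              "--time", walltime,
              script_path,
          ]
          result = subprocess.run(cmd, check=True, capture_output=True, text=True)
          return result.stdout.strip()   # e.g. "Submitted batch job 12345"

      print(submit("job.sh", partition="alice", qos="high", ntasks=8))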

  16. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interface necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle; other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurements and reporting mechanisms. The paper will also address the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing; test results, including performance numbers compared with Unix servers, will be included. The deployment approach will be addressed as well, including user involvement in testing and the need for a smooth transition while maintaining real-time support. An important aspect of the paper will involve challenges and lessons learned, including COTS product compatibility, implications of phasing decisions and tracking of dependencies, particularly non-software dependencies. The paper will also discuss scheduling challenges in providing real-time flight support during the migration and the requirement to incorporate in the migration changes being made simultaneously for flight support.

  17. Interplanetary Overlay Network Bundle Protocol Implementation

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The Interplanetary Overlay Network (ION) system's BP package, an implementation of the Delay-Tolerant Networking (DTN) Bundle Protocol (BP) and supporting services, has been specifically designed to be suitable for use on deep-space robotic vehicles. Although the ION BP implementation is unique in its use of zero-copy objects for high performance, and in its use of resource-sensitive rate control, it is fully interoperable with other implementations of the BP specification (Internet RFC 5050). The ION BP implementation is built using the same software infrastructure that underlies the implementation of the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP) built into the flight software of Deep Impact. It is designed to minimize resource consumption, while maximizing operational robustness. For example, no dynamic allocation of system memory is required. Like all the other ION packages, ION's BP implementation is designed to port readily between Linux and Solaris (for easy development and for ground system operations) and VxWorks (for flight systems operations). The exact same source code is exercised in both environments. Initially included in the ION BP implementations are the following: libraries of functions used in constructing bundle forwarders and convergence-layer (CL) input and output adapters; a simple prototype bundle forwarder and associated CL adapters designed to run over an IPbased local area network; administrative tools for managing a simple DTN infrastructure built from these components; a background daemon process that silently destroys bundles whose time-to-live intervals have expired; a library of functions exposed to applications, enabling them to issue and receive data encapsulated in DTN bundles; and some simple applications that can be used for system checkout and benchmarking.

  18. Assessment of steam-injected gas turbine systems and their potential application

    NASA Technical Reports Server (NTRS)

    Stochl, R. J.

    1982-01-01

    Results were arrived at by utilizing and expanding on information presented in the literature. The results were analyzed and compared with those for simple gas turbine and combined cycles for both utility power generation and industrial cogeneration applications. The efficiency and specific power of simple gas turbine cycles can be increased as much as 30 and 50 percent, respectively, by the injection of steam into the combustor. Steam-injected gas turbines appear to be economically competitive with both simple gas turbine and combined cycles for small, clean-fuel-fired utility power generation and industrial cogeneration applications. For large powerplants with integrated coal gasifiers, the economic advantages appear to be marginal.

  19. Cpp Utility - Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppel, Fred III; Rigdon, J. Brian

    2014-09-08

    A collection of general Umbra modules that are reused by other Umbra libraries. These capabilities include line segments, file utilities, color utilities, string utilities (for std::string), list utilities (for std::vector), bounding box intersections, range limiters, simple filters, cubic root solvers and a few other utilities.

  20. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis

    PubMed Central

    2011-01-01

    Background A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. Results The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operation system. Although the processing speed was slower than that under other conditions, it remained comparable. Conclusions With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites. PMID:21266047

  1. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    PubMed

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operation system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  2. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be either provided by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine since version 3 is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  3. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  4. High speed real-time wavefront processing system for a solid-state laser system

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing

    2008-03-01

    A high speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. The system consists of a Core 2 industrial PC (IPC) running Linux and the real-time Linux (RT-Linux) operating system (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influences. To compensate the phase aberrations of solid-state lasers successfully, a high speed real-time wavefront processing system is presented. Compared to former systems, this system improves the processing speed considerably. In the new system, the acquisition of image data, the output of control voltage data and the implementation of the reconstructor control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and the man-machine interface are treated as non-real-time tasks in user space. Parallel processing of the real-time tasks in Symmetric Multi-Processing (SMP) mode is the main strategy for improving the speed. In this paper, the performance and efficiency of this wavefront processing system are analyzed. Open-loop experimental results show that the sampling frequency of this system is up to 3300 Hz, and that the system can deal well with phase aberrations from solid-state lasers.
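
    A generic sketch of the real-time part of such a loop (not the system described; the reconstructor matrix, integrator gain and array sizes are arbitrary assumptions) is a matrix-vector reconstruction of the correction from the measured slopes, integrated into the actuator voltages each frame:

      import numpy as np

      N_SLOPES, N_ACTUATORS = 128, 61      # assumed sensor and actuator counts
      GAIN = 0.3                           # integrator gain (assumed)

      # Reconstructor matrix mapping measured slopes to actuator corrections;
      # in a real system it comes from calibration, here it is random for illustration.
      R = np.random.randn(N_ACTUATORS, N_SLOPES)

      def control_step(voltages, slopes):
          """One closed-loop iteration: reconstruct the correction from the
          wavefront-sensor slopes and integrate it into the actuator voltages."""
          return voltages - GAIN * (R @ slopes)

      voltages = np.zeros(N_ACTUATORS)
      for _ in range(10):                  # ten simulated frames of slope data
          voltages = control_step(voltages, np.random.randn(N_SLOPES))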

  5. Elegent—An elastic event generator

    NASA Astrophysics Data System (ADS)

    Kašpar, J.

    2014-03-01

    Although elastic scattering of nucleons may look like a simple process, it presents a long-lasting challenge for theory. Due to the missing hard energy scale, perturbative QCD cannot be applied. Instead, many phenomenological/theoretical models have emerged. In this paper we present a unified implementation of some of the most prominent models in a C++ library, moreover extended to account for effects of the electromagnetic interaction. The library is complemented with a number of utilities, for instance programs to sample many distributions of interest in four-momentum transfer squared, t, impact parameter, b, and collision energy √s. These distributions at ISR, Spp̄S, RHIC, Tevatron and LHC energies are available for download from the project web site, both in the form of ROOT files and PDF figures providing comparisons among the models. The package also includes a tool for Monte-Carlo generation of elastic scattering events, which can easily be embedded in any other program framework.
    Program summary:
    Catalogue identifier: AERT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 10551
    No. of bytes in distributed program, including test data, etc.: 126316
    Distribution format: tar.gz
    Programming language: C++
    Computer: Any in principle, tested on x86-64 architecture.
    Operating system: Any in principle, tested on GNU/Linux.
    RAM: Strongly depends on the task, but typically below 20 MB
    Classification: 11.6
    External routines: ROOT, HepMC
    Nature of problem: Monte-Carlo simulation of elastic nucleon-nucleon collisions
    Solution method: Implementation of some of the most prominent phenomenological/theoretical models, providing a cumulative distribution function that is used for random event generation.
    Running time: Strongly depends on the task, but typically below 1 h.

  6. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.

  7. LORES: Low resolution shape program for the calculation of small angle scattering profiles for biological macromolecules in solution

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Deyhim, A.; Krueger, S.; Gregurick, S. K.

    2005-08-01

    A program for determining the low resolution shape of biological macromolecules, based on the optimization of a small angle neutron scattering profile to experimental data, is presented. This program, termed LORES, relies on a Monte Carlo optimization procedure and will allow for multiple scattering length densities of complex structures. It is therefore more versatile than utilizing a form factor approach to produce low resolution structural models. LORES is easy to compile and use, and allows for structural modeling of biological samples in real time. To illustrate the effectiveness and versatility of the program, we present four specific biological examples, Apoferritin (shell model), Ribonuclease S (ellipsoidal model), a 10-mer dsDNA (duplex helix) and a construct of a 10-mer DNA/PNA duplex helix (heterogeneous structure). These examples are taken from protein and nucleic acid SANS studies, of both large and small scale structures. We find, in general, that our program will accurately reproduce the geometric shape of a given macromolecule, when compared with the known crystallographic structures. We also present results to illustrate the lower limit of the experimental resolution which the LORES program is capable of modeling.
    Program summary:
    Title of program: LORES
    Catalogue identifier: ADVC
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVC
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: SGI Origin200, SGI Octane, SGI Linux, Intel Pentium PC
    Operating systems: UNIX64 6.5 and LINUX 2.4.7
    Programming language used: C
    Memory required to execute with typical data: 8 MB
    No. of lines in distributed program, including test data, etc.: 2270
    No. of bytes in distributed program, including test data, etc.: 13 302
    Distribution format: tar.gz
    External subprograms used: The entire code must be linked with the MATH library

  8. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  9. Arlequin suite ver 3.5: a new series of programs to perform population genetics analyses under Linux and Windows.

    PubMed

    Excoffier, Laurent; Lischer, Heidi E L

    2010-05-01

    We present here a new version of the Arlequin program available under three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.

  10. RTSPM: real-time Linux control software for scanning probe microscopy.

    PubMed

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Exercise environment for Introduction to Cyber Technologies class. This software is essentially a collection of short scripts, configuration files, and small executables that form the exercise component of the Sandia Cyber Technologies Academy's Introduction to Cyber Technologies class. It builds upon other open-source technologies, such as Debian Linux and minimega, to provide comprehensive Linux and networking exercises that make learning these topics exciting and fun. Sample exercises: a pre-built set of home directories the student must navigate through to learn about privilege escalation, the creation of a virtual network playground designed to teach the student about the resiliency of the Internet, and a two-hour Capture the Flag challenge for the final lesson. There are approximately thirty (30) exercises included for the students to complete as part of the course.

  12. A Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present the system consists of 64 nodes, with each node comprising one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.

  13. Ligand Depot: a data warehouse for ligands bound to macromolecules.

    PubMed

    Feng, Zukang; Chen, Li; Maddula, Himabindu; Akcan, Ozgur; Oughtred, Rose; Berman, Helen M; Westbrook, John

    2004-09-01

    Ligand Depot is an integrated data resource for finding information about small molecules bound to proteins and nucleic acids. The initial release (version 1.0, November, 2003) focuses on providing chemical and structural information for small molecules found as part of the structures deposited in the Protein Data Bank. Ligand Depot accepts keyword-based queries and also provides a graphical interface for performing chemical substructure searches. A wide variety of web resources that contain information on small molecules may also be accessed through Ligand Depot. Ligand Depot is available at http://ligand-depot.rutgers.edu/. Version 1.0 supports multiple operating systems including Windows, Unix, Linux and the Macintosh operating system. The current drawing tool works in Internet Explorer, Netscape and Mozilla on Windows, Unix and Linux.

  14. Community-driven computational biology with Debian Linux

    PubMed Central

    2010-01-01

    Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984

  15. Universal distribution of component frequencies in biological and technological systems

    PubMed Central

    Pang, Tin Yau; Maslov, Sergei

    2013-01-01

    Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195
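
    The core empirical observation above, a heavy-tailed distribution of component frequencies, can be illustrated with a short script. The sketch below is not the authors' analysis; it simply estimates a power-law slope from a hypothetical vector of component frequencies by fitting a line to the complementary cumulative distribution in log-log space.

        import numpy as np

        # Hypothetical component frequencies: in how many systems (genomes or Linux
        # installations) each component occurs.
        frequencies = np.array([1, 1, 2, 3, 3, 5, 8, 13, 40, 120, 950, 1000])

        # Complementary cumulative distribution P(F >= f) over the observed values.
        f = np.sort(frequencies)
        ccdf = 1.0 - np.arange(len(f)) / len(f)

        # Rough slope estimate from a straight-line fit in log-log space; for a
        # density p(f) ~ f**(-alpha), the CCDF slope is approximately 1 - alpha.
        slope, _ = np.polyfit(np.log(f), np.log(ccdf), 1)
        print("CCDF slope:", slope, "implied exponent alpha:", 1 - slope)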

  16. BeBot: A Modular Mobile Miniature Robot Platform Supporting Hardware Reconfiguration and Multi-standard Communication

    NASA Astrophysics Data System (ADS)

    Herbrechtsmeier, Stefan; Witkowski, Ulf; Rückert, Ulrich

    Mobile robots are becoming more and more important in research and education. Small ’on the table’ experiments in particular attract interest, because they need no additional or special laboratory equipment. In this context, platforms are desirable that are small, simple to access, and relatively easy to program. An additional powerful information processing unit is advantageous, since it simplifies the implementation of algorithms and the porting of software from desktop computers to the robot platform. In this paper we present a new versatile miniature robot that is well suited for research and education. The small size of the robot, with an edge length of about 9 cm, its robust drive, and its modular structure make the robot a general device for single- and multi-robot experiments executed ’on the table’. For programming and evaluation the robot can be wirelessly connected via Bluetooth or WiFi. The operating system of the robot is based on the standard Linux kernel and the GNU C standard library. A player/stage model eases software development and testing.

  17. GOssTo: a stand-alone application and a web tool for calculating semantic similarities on the Gene Ontology.

    PubMed

    Caniza, Horacio; Romero, Alfonso E; Heron, Samuel; Yang, Haixuan; Devoto, Alessandra; Frasca, Marco; Mesiti, Marco; Valentini, Giorgio; Paccanaro, Alberto

    2014-08-01

    We present GOssTo, the Gene Ontology semantic similarity Tool, a user-friendly software system for calculating semantic similarities between gene products according to the Gene Ontology. GOssTo is bundled with six semantic similarity measures, including both term- and graph-based measures, and has extension capabilities to allow the user to add new similarities. Importantly, for any measure, GOssTo can also calculate the Random Walk Contribution that has been shown to greatly improve the accuracy of similarity measures. GOssTo is very fast, easy to use, and it allows the calculation of similarities on a genomic scale in a few minutes on a regular desktop machine. alberto@cs.rhul.ac.uk GOssTo is available both as a stand-alone application running on GNU/Linux, Windows and MacOS from www.paccanarolab.org/gossto and as a web application from www.paccanarolab.org/gosstoweb. The stand-alone application features a simple and concise command line interface for easy integration into high-throughput data processing pipelines. © The Author 2014. Published by Oxford University Press.

  18. Google Sky: A Digital View of the Night Sky

    NASA Astrophysics Data System (ADS)

    Connolly, A.; Scranton, R.; Ornduff, T.

    2008-11-01

    From its inception astronomy has been a visual science: from careful naked-eye observations of the sky, to the use of telescopes and photographs to map the distribution of stars and galaxies, to the current era of digital cameras that can image the sky over many decades of the electromagnetic spectrum. Sky in Google Earth (http://earth.google.com) and Google Sky (http://www.google.com/sky) continue this tradition, providing an intuitive visual interface to some of the largest astronomical imaging surveys of the sky. By streaming multi-color imagery, catalogs, and time domain data, and by annotating interesting astronomical sources and events with placemarks, podcasts, and videos, Sky provides a panchromatic view of the universe accessible to anyone with a computer. Beyond simple exploration of the sky, Google Sky enables users to create and share content with others around the world. With an open interface available on Linux, Mac OS X, and Windows, and translations of the content into over 20 different languages, we present Sky as the embodiment of a virtual telescope for discovery and for sharing the excitement of astronomy and science as a whole.

  19. Generation of non-genomic oligonucleotide tag sequences for RNA template-specific PCR

    PubMed Central

    Pinto, Fernando Lopes; Svensson, Håkan; Lindblad, Peter

    2006-01-01

    Background In order to overcome genomic DNA contamination in transcriptional studies, reverse template-specific polymerase chain reaction, a modification of reverse transcriptase polymerase chain reaction, is used. The possibility of using tags whose sequences are not found in the genome further improves reverse template-specific polymerase chain reaction experiments. Given the absence of software for producing such genome-absent tags, a simple tool to fulfill this need was developed. Results The program was developed in Perl, with separate use of the Basic Local Alignment Search Tool (BLAST), making the tool platform independent (known to run on Windows XP and Linux). In order to test the performance of the generated tags, several molecular experiments were performed. The results show that Tagenerator is capable of generating tags with good priming properties which will deliberately not result in PCR amplification of genomic DNA. Conclusion The program Tagenerator is capable of generating tag sequences that combine absence from the genome with good priming properties for RT-PCR based experiments, circumventing the effects of genomic DNA contamination in an RNA sample. PMID:16820068
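
    The sketch below illustrates the underlying idea of genome-absent tag selection in a few lines of Python: candidate oligonucleotides are generated at random and kept only if neither they nor their reverse complements occur in a genome sequence, and if their GC content lies in a reasonable priming range. This is a conceptual toy, not Tagenerator's actual algorithm (which delegates the genome search to BLAST); the toy genome string and thresholds are assumptions.

        import random

        def gc_content(seq):
            """Fraction of G and C bases in a sequence."""
            return (seq.count("G") + seq.count("C")) / len(seq)

        def absent_from_genome(tag, genome):
            """True if neither the tag nor its reverse complement occurs in the genome."""
            revcomp = tag.translate(str.maketrans("ACGT", "TGCA"))[::-1]
            return tag not in genome and revcomp not in genome

        # Toy genome; a real workflow would search chromosome-scale sequences.
        genome = "ATGCGTACGTTAGCATGCCGATCGATCGGGCTAACCGTTAGGCATCGA"

        random.seed(1)
        candidates = ("".join(random.choice("ACGT") for _ in range(20)) for _ in range(10000))
        tags = [t for t in candidates
                if 0.4 <= gc_content(t) <= 0.6 and absent_from_genome(t, genome)]
        print(len(tags), "candidate tags are absent from the toy genome")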

  20. WeBIAS: a web server for publishing bioinformatics applications.

    PubMed

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

    One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on the reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution for making command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms, which cover the vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy also on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
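
    To make the basic idea concrete, the fragment below shows a minimal, self-contained way to expose a command-line program through a web form using only the Python standard library. It is a sketch of the general pattern, not WeBIAS itself or its API; the port number and the stand-in command ("wc") are arbitrary choices for the example, and a real deployment would add the queuing, authentication, and result storage that WeBIAS provides.

        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs

        FORM = (b"<html><body><form method='POST'>"
                b"<textarea name='data'></textarea>"
                b"<input type='submit' value='Run'></form></body></html>")

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Serve the submission form.
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(FORM)

            def do_POST(self):
                # Pass the submitted text to a command-line tool and return its output.
                length = int(self.headers["Content-Length"])
                data = parse_qs(self.rfile.read(length).decode()).get("data", [""])[0]
                result = subprocess.run(["wc"], input=data, capture_output=True, text=True)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(result.stdout.encode())

        HTTPServer(("", 8000), Handler).serve_forever()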

  1. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance in a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
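
    For readers unfamiliar with how such repeat searches work, the short Python sketch below finds simple tandem repeats with a back-referencing regular expression. It only illustrates the concept; it is not SSR_pipeline's search algorithm, and the motif-length and repeat-count thresholds are arbitrary example values.

        import re

        def find_repeats(seq, min_motif=2, max_motif=6, min_repeats=4):
            """Yield (start, motif, copies) for simple tandem repeats in seq.

            Nested or compound repeats may be reported once per motif length;
            a production tool would merge or filter such overlapping hits.
            """
            for motif_len in range(min_motif, max_motif + 1):
                # (.{n}) captures a candidate motif; \1{k,} demands further copies of it.
                pattern = re.compile(r"(.{%d})\1{%d,}" % (motif_len, min_repeats - 1))
                for m in pattern.finditer(seq):
                    yield m.start(), m.group(1), len(m.group(0)) // motif_len

        sequence = "ACGT" + "AG" * 8 + "TTACG" + "CAT" * 5 + "GGA"
        for start, motif, copies in find_repeats(sequence):
            print(f"{motif} x {copies} at position {start}")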

  2. Test Waveform Applications for JPL STRS Operating Environment

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Peters, Kenneth J.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.; Duncan, Courtney B.

    2013-01-01

    This software demonstrates use of the JPL Space Telecommunications Radio System (STRS) Operating Environment (OE), tests APIs (application programming interfaces) presented by JPL STRS OE, and allows for basic testing of the underlying hardware platform. This software uses the JPL STRS Operating Environment ["JPL Space Telecommunications Radio System Operating Environment," (NPO-4776) NASA Tech Briefs, commercial edition, Vol. 37, No. 1 (January 2013), p. 47] to interact with the JPL-SDR Software Defined Radio developed for the CoNNeCT (COmmunications, Navigation, and Networking rEconfigurable Testbed) Project as part of the SCaN Testbed installed on the International Space Station (ISS). These are the first applications that are compliant with the new NASA STRS Architecture Standard. Several example waveform applications are provided to demonstrate use of the JPL STRS OE for the JPL-SDR platform used for the CoNNeCT Project. The waveforms provide a simple digitizer and playback capability for the S-Band RF slice, and a simple digitizer for the GPS slice [CoNNeCT Global Positioning System RF Module, (NPO-47764) NASA Tech Briefs, commercial edition, Vol. 36, No. 3 (March 2012), p. 36]. These waveforms may be used for hardware test, as well as for on-orbit or laboratory checkout. Additional example waveforms implement SpaceWire and timer modules, which can be used for time transfer and demonstration of communication between the two Xilinx FPGAs in the JPL-SDR. The waveforms are also compatible with ground-based use of the JPL STRS OE on radio breadboards and Linux.

  3. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data.

    PubMed

    Miller, Mark P; Knaus, Brian J; Mullins, Thomas D; Haig, Susan M

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).

  4. 10 Gigabit Ethernet Performance on SGI Altix and Origin Systems

    NASA Technical Reports Server (NTRS)

    Meyer, Andy

    2005-01-01

    As the state of high performance computing continues to advance, the size of datasets continues to grow, driving a need for high-bandwidth data networks. 10 Gigabit Ethernet is the latest step in the popular Ethernet family of networks. We have evaluated the S2io Xframe 10 Gigabit Ethernet adapter on 512p SGI Altix systems running ProPack 3, and Origin systems running Irix 6.5.24 and 6.5.26, in our production supercomputing environment. We encountered a number of performance and stability issues, which were promptly dealt with by SGI and S2io. Using nttcp we tested TCP performance for single and multiple streams, and we tested file transfer using NFS and bbftp. We will present the results of our testing, including the effects of various tuning options on throughput and CPU utilization, and offer suggestions for configuring and tuning S2io 10 Gigabit Ethernet cards in an Altix/Linux or Origin/Irix environment.

  5. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  6. NBodyLab Simulation Experiments with GRAPE-6a AND MD-GRAPE2 Acceleration

    NASA Astrophysics Data System (ADS)

    Johnson, V.; Ates, A.

    2005-12-01

    NbodyLab is an astrophysical N-body simulation testbed for student research. It is accessible via a web interface and runs as a backend framework under Linux. NbodyLab can generate data models or perform star catalog lookups, transform input data sets, perform direct summation gravitational force calculations using a variety of integration schemes, and produce analysis and visualization output products. NEMO (Teuben 1994), a popular stellar dynamics toolbox, is used for some functions. NbodyLab integrators can optionally utilize two types of low-cost desktop supercomputer accelerators, the newly available GRAPE-6a (125 Gflops peak) and the MD-GRAPE2 (64-128 Gflops peak). The initial version of NBodyLab was presented at ADASS 2002. This paper summarizes software enhancements developed subsequently, focusing on GRAPE-6a related enhancements, and gives examples of computational experiments and astrophysical research, including star cluster and solar system studies, that can be conducted with the new testbed functionality.

  7. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. For phylogenetic reconstructions, however, one must contend with artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals can mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects that can also be integrated into automated processing pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs), to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and, being command-line driven, it can be integrated into automated processing pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  8. HARE: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a six-year period under the FAST-OS programs. The first effort was called Right-Weight Kernels (RWK) and was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems to determine how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact: they helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well; created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development; created a standard system measurement tool, Fixed Time Quantum (FTQ), which is widely used for measuring operating system impact on applications; spurred the use of the 9p protocol in several organizations, including IBM; built software in use at many companies, including IBM, Cray, and Google; spurred the creation of alternative runtimes for use on HPC systems; and demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries. Open source was a key part of this work. The code developed for this project is in wide use and available in many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. The rest of this report is organized as follows: first, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we discuss infrastructure software; and we close with a description of the new NIX operating system, future work, and conclusions.

  9. David Whiteside | NREL

    Science.gov Websites

    David Whiteside, HPC System Administrator, David.Whiteside@nrel.gov | 303-275-3943. David has over 10 years of experience with Linux administration and a strong background in system

  10. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and the R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and its combination of automated analysis and platform independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ronald W.

    With the addition of the 3D volume slicer widget, VERAView now relies on Mayavi and its dependents. Enthought's Canopy Python environment provides everything VERAView needs, and pre-built Canopy versions for Windows, Mac OSX, and Linux can be downloaded.

  12. HEP Computing

    Science.gov Websites

    Mail-Migration Procedure on Linux; Mail-Migration Procedure on Windows; How to Migrate a Folder to GMail using Pine.

  13. MISR Level 2 TOA/Cloud Versioning

    Atmospheric Science Data Center

    2017-10-11

    ... at this level. Software has been ported over to Linux. The Broadband Albedos have been fixed. New ancillary files: ... Difference Vectors implemented. Block Center Times for AN camera added to product. New ancillary files: ...

  14. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  15. Enhancements to the Sentinel Fireball Network Video Software

    NASA Astrophysics Data System (ADS)

    Watson, Wayne

    2009-05-01

    The Sentinel Fireball Network that supports meteor imaging of bright meteors (fireballs) has been in existence for over ten years. Nearly five years ago it moved from gathering meteor data with a camera and VCR video tape to a fisheye lens attached to a hardware device, the Sentinel box, which allowed meteor data to be recorded on a PC operating under real-time Linux. In 2006, that software, sentuser, was made available on Apple, Linux, and Window operating systems using the Python computer language. It provides basic video and management functionality and a small amount of analytic software capability. This paper describes the new and attractive future features of the software, and, additionally, it reviews some of the research and networks from the past and present using video equipment to collect and analyze fireball data that have applicability to sentuser.

  16. Limits, discovery and cut optimization for a Poisson process with uncertainty in background and signal efficiency: TRolke 2.0

    NASA Astrophysics Data System (ADS)

    Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.

    2010-03-01

    A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, and Poissonian uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21 789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac-OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.

  17. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.

    2013-04-01

    We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 124453. No. of bytes in distributed program, including test data, etc.: 4728604. Distribution format: tar.gz. Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4 GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
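
    As a point of reference for the class of scheme named above (explicit fourth-order Runge-Kutta in time with low-order central differences in space), the fragment below integrates the one-dimensional focusing cubic NLSE on a periodic grid in plain NumPy. It is a minimal single-CPU sketch with an assumed normalization (i*psi_t + psi_xx + 2|psi|^2 psi = 0) and arbitrary grid and step-size choices; it is not NLSEmagic's GPU code.

        import numpy as np

        # Periodic grid and a moving bright-soliton initial condition.
        N, L = 512, 40.0
        dx = L / N
        x = np.linspace(-L / 2, L / 2, N, endpoint=False)
        psi = 1.0 / np.cosh(x) * np.exp(0.5j * x)

        def rhs(u):
            """psi_t = i (psi_xx + 2 |psi|^2 psi), second-order periodic differences."""
            lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
            return 1j * (lap + 2 * np.abs(u) ** 2 * u)

        dt = 0.1 * dx**2              # conservative step for the fully explicit scheme
        for _ in range(2000):         # classical fourth-order Runge-Kutta in time
            k1 = rhs(psi)
            k2 = rhs(psi + 0.5 * dt * k1)
            k3 = rhs(psi + 0.5 * dt * k2)
            k4 = rhs(psi + dt * k3)
            psi += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        # The discrete mass should be (approximately) conserved by the evolution.
        print("mass:", np.sum(np.abs(psi) ** 2) * dx)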

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Gerry; et al.

    The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For loss-less data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified and unidirectional, but RFC 793 compliant, version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
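
    The receiving side described above relies on nothing more exotic than a standard TCP socket. The sketch below shows a generic Linux-side byte sink in Python that accepts one connection and counts the data streamed into it; it is only an illustration of the idea of "a PC with the standard TCP/IP stack as a receiver", with an arbitrary port number, and bears no relation to the actual CMS DAQ receiving software.

        import socket

        # Generic TCP sink: accept a single connection and count the bytes received.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", 10000))     # example port, chosen arbitrarily
            srv.listen(1)
            conn, addr = srv.accept()
            received = 0
            with conn:
                while True:
                    chunk = conn.recv(1 << 20)   # read up to 1 MiB at a time
                    if not chunk:                # sender closed the one-way stream
                        break
                    received += len(chunk)
            print(f"received {received} bytes from {addr}")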

  19. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online. bionetgen.help@gmail.com. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. NGSANE: a lightweight production informatics framework for high-throughput data analysis.

    PubMed

    Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C

    2014-05-15

    The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components depreciate rapidly because of the evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot swappable modular components as opposed to the more rigid program call wrapping by higher level languages, as implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for set up and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. Ngsane is implemented in bash and publicly available under BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Denis.Bauer@csiro.au Supplementary data are available at Bioinformatics online.

  1. Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.

    PubMed

    Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M

    2015-01-01

    The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.

  2. Dugong: a Docker image, based on Ubuntu Linux, focused on reproducibility and replicability for bioinformatics analyses.

    PubMed

    Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R

    2018-02-01

    This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies), in alternative computational environments. The software operates through a user-friendly XFCE4 graphic interface that allows software management and installation by users not fully familiarized with the Linux command line and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license. Luiz.nunes@ufabc.edu.br. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  4. High-performance data processing using distributed computing on the SOLIS project

    NASA Astrophysics Data System (ADS)

    Wampler, Stephen

    2002-12-01

    The SOLIS solar telescope collects data at a high rate, resulting in 500 GB of raw data each day. The SOLIS Data Handling System (DHS) has been designed to quickly process this data down to 156 GB of reduced data. The DHS design uses pools of distributed reduction processes that are allocated to different observations as needed. A farm of 10 dual-cpu Linux boxes contains the pools of reduction processes. Control is through CORBA and data is stored on a fibre channel storage area network (SAN). Three other Linux boxes are responsible for pulling data from the instruments using SAN-based ringbuffers. Control applications are Java-based while the reduction processes are written in C++. This paper presents the overall design of the SOLIS DHS and provides details on the approach used to control the pooled reduction processes. The various strategies used to manage the high data rates are also covered.

  5. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring

    PubMed Central

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-01-01

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed around a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data-processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for monitoring with images the island of Tenerife and ground deformation on the island of El Hierro. PMID:26295394

  6. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  7. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring.

    PubMed

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-08-19

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed around a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data-processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for monitoring with images the island of Tenerife and ground deformation on the island of El Hierro.

  8. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    NASA Astrophysics Data System (ADS)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, which was carried over ESnet after optimizing end points, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.

  9. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware

    PubMed Central

    Zheng, Da; Burns, Randal; Szalay, Alexander S.

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052

  10. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.

    PubMed

    Zheng, Da; Burns, Randal; Szalay, Alexander S

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.

  11. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.

  12. Initial nitrogen enrichment conditions determines variations in nitrogen substrate utilization by heterotrophic bacterial isolates.

    PubMed

    Ghosh, Suchismita; Ayayee, Paul A; Valverde-Barrantes, Oscar J; Blackwood, Christopher B; Royer, Todd V; Leff, Laura G

    2017-04-04

    The nitrogen (N) cycle consists of complex microbe-mediated transformations driven by a variety of factors, including the diversity and concentrations of N compounds. In this study, we examined taxonomic diversity and N substrate utilization by heterotrophic bacteria isolated from streams under complex and simple N-enrichment conditions. Diversity estimates differed among isolates from the enrichments, but no significant differences in composition were detected. Substrate utilization and substrate range of bacterial assemblages differed within and among enrichment types, and not simply between simple and complex N-enrichments. N substrate use patterns differed between isolates from some complex and simple N-enrichments, while others were unexpectedly similar. Taxonomic composition of isolates did not differ among enrichments and was unrelated to N use, suggesting strong functional redundancy. Ultimately, our results imply that the available N pool influences physiology and selects for bacteria with various abilities that are unrelated to their taxonomic affiliation.

  13. System Security Authorization Agreement (SSAA) for the WIRE Archive and Research Facility

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Wide-Field Infrared Explorer (WIRE) Archive and Research Facility (WARF) is operated and maintained by the Department of Physics, USAF Academy. The lab is located in Fairchild Hall, 2354 Fairchild Dr., Suite 2A103, USAF Academy, CO 80840. The WARF will be used for research and education in support of the NASA Wide Field Infrared Explorer (WIRE) satellite, and for related high-precision photometry missions and activities. The WARF will also contain the WIRE preliminary and final archives prior to their delivery to the National Space Science Data Center (NSSDC). The WARF consists of a suite of equipment purchased under several NASA grants in support of WIRE research. The core system consists of a Red Hat Linux workstation with twin 933 MHz PIII processors, 1 GB of RAM, 133 GB of hard disk space, and DAT and DLT tape drives. The WARF is also supported by several additional networked Linux workstations. Only one of these (an older 450 MHz PIII computer running Red Hat Linux) is currently running, but the addition of several more is expected over the next year. In addition, a printer will soon be added. The WARF will serve as the primary research facility for the analysis and archiving of data from the WIRE satellite, together with limited quantities of other high-precision astronomical photometry data from both ground- and space-based facilities. However, the archive to be created here will not be the final archive; rather, the archive will be duplicated at the NSSDC and public access to the data will generally take place through that site.

  14. Specifying Quality of Service for Distributed Systems Based Upon Behavior Models

    DTIC Science & Technology

    2002-06-01


  15. NUMERICAL NOISE PM SIMULATION IN CMAQ

    EPA Science Inventory

    We have found numerical noise in the latest release of CMAQ when using the yamo advection scheme compiled on a Linux cluster with pgf90 (5.0 or 6.0). We recommend using the -C option to eliminate the numerical noise.

  16. cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.

    PubMed

    Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro

    2016-01-01

    Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires additional implementation work to be used in parallel, for example with OpenMP (Open Multi-Processing) libraries. Given the continuing accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
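
    cljam itself is a Clojure library, but the kind of record-level SAM/BAM processing described above looks much the same in any language. As a rough illustration, the Python sketch below uses the separate pysam library to count mapped and unmapped records in a BAM file; the file name is hypothetical and the snippet is not part of cljam.

        import pysam  # a Python SAM/BAM library, standing in for cljam in this example

        mapped = unmapped = 0
        with pysam.AlignmentFile("example.bam", "rb") as bam:
            # until_eof=True streams records sequentially without requiring an index.
            for read in bam.fetch(until_eof=True):
                if read.is_unmapped:
                    unmapped += 1
                else:
                    mapped += 1
        print("mapped:", mapped, "unmapped:", unmapped)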

  17. YAHA: fast and flexible long-read alignment with optimal breakpoint detection.

    PubMed

    Faust, Gregory G; Hall, Ira M

    2012-10-01

    With improved short-read assembly algorithms and the recent development of long-read sequencers, split mapping will soon be the preferred method for structural variant (SV) detection. Yet, current alignment tools are not well suited for this. We present YAHA, a fast and flexible hash-based aligner. YAHA is as fast and accurate as BWA-SW at finding the single best alignment per query and is dramatically faster and more sensitive than both SSAHA2 and MegaBLAST at finding all possible alignments. Unlike other aligners that report all, or one, alignment per query, or that use simple heuristics to select alignments, YAHA uses a directed acyclic graph to find the optimal set of alignments that cover a query using a biologically relevant breakpoint penalty. YAHA can also report multiple mappings per defined segment of the query. We show that YAHA detects more breakpoints in less time than BWA-SW across all SV classes, and especially excels at complex SVs comprising multiple breakpoints. YAHA is currently supported on 64-bit Linux systems. Binaries and sample data are freely available for download from http://faculty.virginia.edu/irahall/YAHA. imh4y@virginia.edu.

  18. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  19. LiGRO: a graphical user interface for protein-ligand molecular dynamics.

    PubMed

    Kagami, Luciano Porto; das Neves, Gustavo Machado; da Silva, Alan Wilter Sousa; Caceres, Rafael Andrade; Kawano, Daniel Fábio; Eifler-Lima, Vera Lucia

    2017-10-04

    To speed up the drug-discovery process, molecular dynamics (MD) calculations performed in GROMACS can be coupled to docking simulations for the post-screening analyses of large compound libraries. This requires generating the topology of the ligands in different software, some basic knowledge of Linux command lines, and a certain familiarity in handling the output files. LiGRO-the python-based graphical interface introduced here-was designed to overcome these protein-ligand parameterization challenges by allowing the graphical (non command line-based) control of GROMACS (MD and analysis), ACPYPE (ligand topology builder) and PLIP (protein-binder interactions monitor)-programs that can be used together to fully perform and analyze the outputs of complex MD simulations (including energy minimization and NVT/NPT equilibration). By allowing the calculation of linear interaction energies in a simple and quick fashion, LiGRO can be used in the drug-discovery pipeline to select compounds with a better protein-binding interaction profile. The design of LiGRO allows researchers to freely download and modify the software, with the source code being available under the terms of a GPLv3 license from http://www.ufrgs.br/lasomfarmacia/ligro/ .

  20. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  1. In silico reconstitution of Listeria propulsion exhibits nano-saltation.

    PubMed

    Alberts, Jonathan B; Odell, Garrett M

    2004-12-01

    To understand how the actin-polymerization-mediated movements in cells emerge from myriad individual protein-protein interactions, we developed a computational model of Listeria monocytogenes propulsion that explicitly simulates a large number of monomer-scale biochemical and mechanical interactions. The literature on actin networks and L. monocytogenes motility provides the foundation for a realistic mathematical/computer simulation, because most of the key rate constants governing actin network dynamics have been measured. We use a cluster of 80 Linux processors and our own suite of simulation and analysis software to characterize salient features of bacterial motion. Our "in silico reconstitution" produces qualitatively realistic bacterial motion with regard to speed and persistence of motion and actin tail morphology. The model also produces smaller scale emergent behavior; we demonstrate how the observed nano-saltatory motion of L. monocytogenes, in which runs punctuate pauses, can emerge from a cooperative binding and breaking of attachments between actin filaments and the bacterium. We describe our modeling methodology in detail, as it is likely to be useful for understanding any subcellular system in which the dynamics of many simple interactions lead to complex emergent behavior, e.g., lamellipodia and filopodia extension, cellular organization, and cytokinesis.

  2. How do I resolve problems reading the binary data?

    Atmospheric Science Data Center

    2014-12-08

    Factors affecting compilation include differing versions of the operating system and of the compilers the read software is run on. Unix machines are typically Big Endian architecture while Linux systems are Little Endian architecture, so binary data generated on a Unix machine are Big Endian and may need byte swapping before they can be read correctly on a Linux system.

  3. WASP8 Download

    EPA Pesticide Factsheets

    All of the WASP installers are listed below: a 64-bit Windows installer, a 64-bit Mac OS X installer (Yosemite or higher), and a 64-bit Linux installer (built on Ubuntu). You will need to know how to install software on your target operating system.

  4. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov Websites

    Your username and password will match your NREL network account login and password. From OS X or Linux, open a terminal. To reach the login node, open a Remote Desktop connection using the server name WINHPC02.

  5. NEMiD: a web-based curated microbial diversity database with geo-based plotting.

    PubMed

    Bhattacharjee, Kaushik; Joshi, Santa Ram

    2014-01-01

    The majority of the Earth's microbes remain unknown, and their potential utility cannot be exploited until they are discovered and characterized. They provide wide scope for the development of new strains as well as biotechnological uses. The documentation and bioprospection of microorganisms carry enormous significance considering their relevance to human welfare. This creates an urgent need to develop a database with emphasis on the microbial diversity of the largest untapped reservoirs in the biosphere. The data annotated in the North-East India Microbial database (NEMiD) were obtained by the isolation and characterization of microbes from different parts of the Eastern Himalayan region. The database was constructed as a relational database management system (RDBMS) for data storage in MySQL in the back-end on a Linux server and implemented in an Apache/PHP environment. This database provides a base for understanding the soil microbial diversity pattern in this megabiodiversity hotspot and indicates the distribution patterns of various organisms along with identification. The NEMiD database is freely available at www.mblabnehu.info/nemid/.

  6. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    NASA Astrophysics Data System (ADS)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. The GNU utilities make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets with a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.
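
    The workflow in the abstract (profile the serial code to find the slowest routines, then parallelize only those) is language-agnostic. The sketch below illustrates it with Python's cProfile and multiprocessing rather than the authors' gprof/OpenMP Fortran toolchain; hydrologic_unit is a made-up stand-in for a slow SWAT subroutine.

        # Step 1: profile a serial run to locate the hot spot.
        # Step 2: parallelize only that hot spot across worker processes.
        import cProfile
        from multiprocessing import Pool

        def hydrologic_unit(seed):
            """Hypothetical stand-in for one expensive SWAT subroutine call."""
            total = 0.0
            for i in range(1, 200_000):
                total += (seed % 97) / i
            return total

        def run_serial(units):
            return [hydrologic_unit(u) for u in units]

        if __name__ == "__main__":
            units = list(range(64))
            cProfile.run("run_serial(units)", sort="cumulative")  # find the slow routine
            with Pool(processes=8) as pool:                       # then parallelize it
                results = pool.map(hydrologic_unit, units)
            print(len(results), "units simulated in parallel")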

  7. Using Tablet for visual exploration of second-generation sequencing data.

    PubMed

    Milne, Iain; Stephen, Gordon; Bayer, Micha; Cock, Peter J A; Pritchard, Leighton; Cardle, Linda; Shaw, Paul D; Marshall, David

    2013-03-01

    The advent of second-generation sequencing (2GS) has provided a range of significant new challenges for the visualization of sequence assemblies. These include the large volume of data being generated, short-read lengths and different data types and data formats associated with the diversity of new sequencing technologies. This article illustrates how Tablet-a high-performance graphical viewer for visualization of 2GS assemblies and read mappings-plays an important role in the analysis of these data. We present Tablet, and through a selection of use cases, demonstrate its value in quality assurance and scientific discovery, through features such as whole-reference coverage overviews, variant highlighting, paired-end read mark-up, GFF3-based feature tracks and protein translations. We discuss the computing and visualization techniques utilized to provide a rich and responsive graphical environment that enables users to view a range of file formats with ease. Tablet installers can be freely downloaded from http://bioinf.hutton.ac.uk/tablet in 32 or 64-bit versions for Windows, OS X, Linux or Solaris. For further details on the Tablet, contact tablet@hutton.ac.uk.

  8. NEMiD: A Web-Based Curated Microbial Diversity Database with Geo-Based Plotting

    PubMed Central

    Bhattacharjee, Kaushik; Joshi, Santa Ram

    2014-01-01

    The majority of the Earth's microbes remain unknown, and their potential utility cannot be exploited until they are discovered and characterized. They provide wide scope for the development of new strains as well as biotechnological uses. The documentation and bioprospection of microorganisms carry enormous significance considering their relevance to human welfare. This creates an urgent need to develop a database with emphasis on the microbial diversity of the largest untapped reservoirs in the biosphere. The data annotated in the North-East India Microbial database (NEMiD) were obtained by the isolation and characterization of microbes from different parts of the Eastern Himalayan region. The database was constructed as a relational database management system (RDBMS) for data storage in MySQL in the back-end on a Linux server and implemented in an Apache/PHP environment. This database provides a base for understanding the soil microbial diversity pattern in this megabiodiversity hotspot and indicates the distribution patterns of various organisms along with identification. The NEMiD database is freely available at www.mblabnehu.info/nemid/. PMID:24714636

  9. Performance evaluation of multi-channel wireless mesh networks with embedded systems.

    PubMed

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementations in accordance with the IEEE 802.11s draft standard: the Linux open80211s project and the FreeBSD WMN implementation. While some studies have focused on test beds of WMNs based on the open80211s project, none are based on FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows legacy systems to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other is used to connect the stations to the wireless mesh network. By using power-efficient 802.11 technology, this device can also be used as a gateway for a wireless sensor network (WSN).

  10. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.

  11. Serial Interface through Stream Protocol on EPICS Platform for Distributed Control and Monitoring

    NASA Astrophysics Data System (ADS)

    Das Gupta, Arnab; Srivastava, Amit K.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    Remote operation of equipment and devices is implemented in distributed systems in order to control and properly monitor process values. For such remote operations, the Experimental Physics and Industrial Control System (EPICS) is an important software tool for control and monitoring of a wide range of scientific parameters. A hardware interface was developed for implementation of the EPICS software so that different equipment such as data converters, power supplies, and pump controllers can be remotely operated through a stream protocol. EPICS base was set up on Windows as well as Linux operating systems for control and monitoring, while the EPICS modules asyn and StreamDevice were used to interface the equipment over the standard RS-232/RS-485 protocol. The StreamDevice protocol communicates with the serial line through an interface to asyn drivers. The graphical user interface and alarm handling were implemented with the Motif Editor and Display Manager (MEDM) and Alarm Handler (ALH) command-line channel access utility tools. This paper describes the developed application, which was tested with different equipment and devices serially interfaced to PCs on a distributed network.
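
    For a sense of how such EPICS process variables can be read, written, and monitored programmatically over Channel Access, a minimal client-side sketch using the pyepics library is shown below; the PV names are hypothetical, and the abstract's MEDM and ALH tools remain the display and alarm applications actually used.

        # Minimal Channel Access client sketch (pyepics); PV names are made up.
        import time
        from epics import PV

        pressure = PV("TEST:PUMP:PRESSURE")   # hypothetical readback PV
        setpoint = PV("TEST:PUMP:SETPOINT")   # hypothetical setpoint PV

        def on_change(pvname=None, value=None, **kwargs):
            """Callback fired by Channel Access whenever the reading updates."""
            print(pvname, "=", value)

        pressure.add_callback(on_change)      # monitor the process value
        setpoint.put(1.5)                     # write a new setpoint remotely
        time.sleep(10)                        # let monitor callbacks arrive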

  12. Software for Displaying Data from Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert

    2003-01-01

    Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.

  13. MSTor: A program for calculating partition functions, free energies, enthalpies, entropies, and heat capacities of complex molecules including torsional anharmonicity

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.

    2012-08-01

    We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit, and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperatures) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomain defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. 
Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.

  14. XVD Image Display Program

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Andres, Paul M.; Mortensen, Helen B.; Parizher, Vadim; McAuley, Myche; Bartholomew, Paul

    2009-01-01

    The XVD [X-Windows VICAR (video image communication and retrieval) Display] computer program offers an interactive display of VICAR and PDS (planetary data systems) images. It is designed to efficiently display multiple-GB images and runs on Solaris, Linux, or Mac OS X systems using X-Windows.

  15. 75 FR 47609 - U.S. Customs and Border Protection; Notice of Issuance of Final Determination Concerning a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    Communication Manager is designed to run on a variety of Linux-based media servers. Some servers are in the form of blades; these are cards similar to printed circuit cards.

  16. Unix survival guide.

    PubMed

    Stein, Lincoln D

    2007-01-01

    For a mixture of historical and practical reasons, much of the bioinformatics software discussed in this series runs on Linux, Mac OSX, Solaris, or one of the many other Unix variants. This appendix provides the novice with easy-to-understand information needed to survive in the Unix environment.

  17. Using ParaView Software on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    The ssh and terminal programs come pre-installed on most Linux and Mac systems. On Windows, the ssh and terminal functions are provided by the programs plink.exe and cmd.exe, of which only cmd.exe will come pre-installed.

  18. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov Websites

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  19. Student’s Entrepreneur Model Development in Creative Industry through Utilization of Web Development Software and Educational Game

    NASA Astrophysics Data System (ADS)

    Hasan, B.; Hasbullah, H.; Elvyanti, S.; Purnama, W.

    2018-02-01

    The creative industry is the utilization of the creativity, skill, and talent of individuals to create wealth and jobs by generating and exploiting individual creative power. In the field of design, the utilization of information technology can spur the creative industry; the development of creative industry design will accommodate many creative people who can pour out their ideas and creativity without limitations. Open Source software is a trend in information technology that has developed since the 1990s. Examples of applications developed with the Open Source approach are the Apache web server, the Linux and Android operating systems, and the MySQL database. This entrepreneurship-based community service activity aims to: 1) give an idea of the profile of UPI students' knowledge of entrepreneurship in software-based creative industry businesses using web development software and educational games; 2) create a model for fostering entrepreneurship based on the creative software industry by leveraging web development and educational games; 3) conduct training and guidance for UPI students who want to develop businesses in the software segment of the creative industries. The entrepreneurship-based PKM activity was attended by about 35 DPTE FPTK UPI students with high entrepreneurial interest and competence in information technology. The outcome generated from the entrepreneurship PKM is the emergence of student entrepreneurs interested in the software field of the creative industry who are able to open up business opportunities for themselves and others. Another outcome of this entrepreneurship PKM activity is the publication of articles or scientific papers in nationally or internationally indexed journals.

  20. Upgrading Custom Simulink Library Components for Use in Newer Versions of Matlab

    NASA Technical Reports Server (NTRS)

    Stewart, Camiren L.

    2014-01-01

    The Spaceport Command and Control System (SCCS) at Kennedy Space Center (KSC) is a control system for monitoring and launching manned launch vehicles. Simulations of ground support equipment (GSE) and the launch vehicle systems are required throughout the life cycle of SCCS to test software, hardware, and procedures and to train the launch team. The simulations of the GSE at the launch site, in conjunction with off-line processing locations, are developed using Simulink, a piece of Commercial Off-The-Shelf (COTS) software. The simulations that are built are then converted into code and run in a simulation engine called Trick, a Government Off-The-Shelf (GOTS) piece of software developed by NASA. In the world of hardware and software, it is not uncommon to see the products that are utilized be upgraded and patched or eventually fade away into an obsolete status. In the case of the SCCS simulation software, MathWorks has released a number of stable versions of Simulink since the deployment of the software on the Development Work Stations in the Linux environment (DWLs). The upgraded versions of Simulink introduce a number of new tools and resources that, if utilized fully and correctly, will save time and resources during the overall development of the GSE simulations and their corresponding documentation. Unfortunately, simply importing the already-built simulations into the new Matlab environment will not suffice, as it may produce results that differ from those expected in the version currently being utilized. Thus, an upgrade execution plan was developed and executed to fully upgrade the simulation environment to one of the latest versions of Matlab.

  1. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .

  2. CompactPCI/Linux Platform in FTU Slow Control System

    NASA Astrophysics Data System (ADS)

    Iannone, F.; Wang, L.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.

    2004-12-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU control system, designed with SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME MOTOROLA 68030 boards with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. Therefore, we had to look for cost-effective solutions, and finally a CompactPCI Intel x86 platform with the Linux operating system was chosen. A software port has been done, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process and network communications and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and the low costs of the open source solutions. Moreover, the huge number of software packages available in the open source environment will assure less painful maintenance and will open the way to further improvements of the system itself.

  3. A dynamic kernel modifier for linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium-III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.

  4. Gaining Cyber Dominance

    DTIC Science & Technology

    2015-01-01

    Features include robust team exercise and simulation; air-gapped isolation from production networks; "train as you fight" scenarios; and advanced user and Internet features. Tools mentioned include Security Onion, SIFT (Linux/Windows), Kali, Rucksack, Docker, and VTS.

  5. Performance Analysis of the ARL Linux Networx Cluster

    DTIC Science & Technology

    2004-06-01

    The benchmarks included OVERFLOW, GAMESS, COBALT, LSDYNA, and FLUENT, with processors selected by the SGE scheduler; benchmarks on the Origin 3800 were executed using IRIX cpusets. One of the benchmarks defines a missile with grid fins consisting of seventeen million cells [3].

  6. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem provided a shared memory (SHM) region between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved and the feature was lost. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
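
    The one-sided MPI_PUT operation that the authors wrap is part of standard MPI; the sketch below shows the basic pattern with mpi4py (run with, e.g., mpirun -np 2 python put_demo.py) rather than the authors' MPICH-level C wrapper or the ivshmem transport itself.

        # One-sided put into a remote rank's memory window (mpi4py sketch).
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        local = np.zeros(4, dtype="d")          # each rank exposes this buffer
        win = MPI.Win.Create(local, comm=comm)  # as an RMA window

        win.Fence()
        if rank == 0 and comm.Get_size() > 1:
            payload = np.full(4, 42.0)
            win.Put(payload, 1)                 # write into rank 1 without a matching recv
        win.Fence()

        if rank == 1:
            print("rank 1 window after put:", local)
        win.Free()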

  7. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    PubMed

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a creative commons license along with additional documentation and a tutorial from (http://bioinf.nuigalway.ie).

  8. CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.

    PubMed

    Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav

    2017-11-15

    CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online.

  9. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    PubMed

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  10. BioSMACK: a linux live CD for genome-wide association analyses.

    PubMed

    Hong, Chang Bum; Kim, Young Jin; Moon, Sanghoon; Shin, Young-Ah; Go, Min Jin; Kim, Dong-Joon; Lee, Jong-Young; Cho, Yoon Shin

    2012-01-01

    Recent advances in high-throughput genotyping technologies have enabled us to conduct a genome-wide association study (GWAS) on a large cohort. However, analyzing millions of single nucleotide polymorphisms (SNPs) is still a difficult task for researchers conducting a GWAS. Several difficulties, such as compatibility and dependency problems, are often encountered by researchers during the installation of analytical software. This is a huge obstacle for any research institute without computing facilities and specialists. Therefore, a proper research environment is an urgent need for researchers working on GWAS. We developed BioSMACK to provide a research environment for GWAS that requires no configuration and is easy to use. BioSMACK is based on the Ubuntu Live CD, which offers a complete Linux-based operating system environment without installation. Moreover, we provide users with a GWAS manual consisting of a series of guidelines for GWAS and useful examples. BioSMACK is freely available at http://ksnp.cdc.go.kr/biosmack.

  11. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

    MAPA (Modular Accelerator Physics Analysis) is an object oriented application for accelerator design and analysis with a Motif based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of Lifetime plots and Surfaces of Section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  12. PhAst: A Flexible IDL Astronomical Image Viewer

    NASA Astrophysics Data System (ADS)

    Rehnberg, Morgan; Crawford, R.; Trueblood, M.; Mighell, K.

    2012-01-01

    We present near-Earth asteroid data analyzed with PhAst, a new IDL astronomical image viewer based on the existing application ATV. PhAst opens, displays, and analyzes an arbitrary number of FITS images. Analysis packages include image calibration, photometry, and astrometry (provided through an interface with SExtractor, SCAMP, and missFITS). PhAst has been designed to generate reports for Minor Planet Center reporting. PhAst is cross platform (Linux/Mac OSX/Windows for image viewing and Linux/Mac OSX for image analysis) and can be downloaded from the following website at NOAO: http://www.noao.edu/staff/mighell/phast/. Rehnberg was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program and the Department of Defense ASSURE program through Scientific Program Order No. 13 (AST-0754223) of the Cooperative Agreement No. AST-0132798 between the Association of Universities for Research in Astronomy (AURA) and the NSF.

  13. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability, and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on PCs. However, a PC is not portable and needs more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm based on this platform, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream can flow smoothly between the camera and the ARM chip on the embedded Linux system. This is an effective way of using ARM to build a portable embedded iris recognition system.

  14. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality that it can give, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties of on-line micro electrode fabrication and tool wear compensation, a micro-EDM machine vision system was developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61 μm and an overall magnification of 113~729. Based on the Linux operating system, an image capturing program was developed with the V4L2 API, and an image processing program was developed using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the micro electrode diameter can be measured on-line. Experiments have been carried out to prove its performance, and the sources of measurement error are also analyzed.
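
    The Canny-plus-contour measurement step can be illustrated compactly. The sketch below uses Python and OpenCV (assuming OpenCV 4.x, where findContours returns two values) on a synthetic image rather than the authors' C/V4L2 capture code; the micrometers-per-pixel factor simply reuses the 1.61 μm optical resolution quoted above.

        # Measure the width of a bright bar (stand-in for an electrode image).
        import cv2
        import numpy as np

        UM_PER_PIXEL = 1.61   # optical resolution quoted in the abstract

        frame = np.zeros((200, 200), dtype=np.uint8)
        cv2.rectangle(frame, (90, 20), (110, 180), 255, thickness=-1)  # synthetic electrode

        edges = cv2.Canny(frame, 50, 150)                 # Canny edge detector
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        print("electrode diameter ~ %.2f micrometers" % (w * UM_PER_PIXEL))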

  15. Open source clustering software.

    PubMed

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
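
    As a usage sketch of the Python interface mentioned above, the example below runs k-means and hierarchical clustering on a made-up 4x3 data matrix with Pycluster; the same routines are also distributed as Bio.Cluster within Biopython.

        # k-means and hierarchical clustering with the C Clustering Library.
        import numpy as np
        import Pycluster   # also available as Bio.Cluster in Biopython

        data = np.array([[1.0, 1.2, 0.9],
                         [1.1, 1.0, 1.0],
                         [8.0, 7.9, 8.2],
                         [7.8, 8.1, 8.0]])

        # k-means with k=2 and 10 random restarts.
        clusterid, error, nfound = Pycluster.kcluster(data, nclusters=2, npass=10)
        print("cluster assignments:", clusterid)

        # Pairwise maximum-linkage hierarchical clustering with Euclidean distance.
        tree = Pycluster.treecluster(data, method="m", dist="e")
        print(tree)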

  16. Data Collection and Recording on the Wisconsin/GSFC X-ray Quantum Calorimeter

    NASA Astrophysics Data System (ADS)

    O'Neill, Laura; X-ray Astrophysics Group at the University of Wisconsin-Madison

    2016-01-01

    The Wisconsin/GSFC X-ray Quantum Calorimeter (XQC) is an astronomical X-ray sounding rocket payload which uses a micro-calorimeter array to detect low-energy (less than 1 keV) X-rays. Three different devices were evaluated to upgrade XQC's data collection and recording system. The system takes incoming data from XQC's pixel sensors and stores it to a memory card; the upgrade uses a much smaller board and a much more compact storage device. The Terasic DE0-Nano, Terasic DE0-Nano SoC, and the BeagleBone Black were tested to determine which would best suit the needs of XQC. The device needed to take incoming data, store it to an SD card, and be able to output it through a USB connection. The Terasic DE0-Nano is a simple FPGA, but needed some peripheral additions for an SD card slot and USB readout. The Terasic DE0-Nano SoC combines a powerful FPGA with a hard processor running Linux. It was able to do what was needed, but pulled too much power in the process. The BeagleBone Black had a microcontroller and also ran Linux. This last device ended up being the best choice, as it did not require too much power and already had a very easy system in place for USB readout. The only difficulty was programming the microcontroller in assembly language. This device is necessary because the telemetry on XQC is not able to send all of the data down during the flight. It records valuable data about low-energy X-rays so that the X-ray Astrophysics Groups at the University of Wisconsin-Madison and Goddard Space Flight Center can analyze and resolve the spectrum of the soft X-ray background. Later, using the digital logic on a Terasic DE0-Nano FPGA, a data simulator for the BeagleBone Black data collection and recording device was created. Programmed with Quartus II, the simulator uses basic digital logic components to fabricate trackable data signals and related timing signals to send to the data management device, as well as other timing signals that are asynchronous to the rest of the circuit, a failsafe enable for outputs, and several user feedback components.

  17. Logging utilization in Idaho: Current and past trends

    Treesearch

    Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson

    2014-01-01

    A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...

  18. The formation of stable pH gradients with weak monovalent buffers for isoelectric focusing in free solution

    NASA Technical Reports Server (NTRS)

    Mosher, Richard A.; Thormann, Wolfgang; Graham, Aly; Bier, Milan

    1985-01-01

    Two methods which utilize simple buffers for the generation of stable pH gradients (useful for preparative isoelectric focusing) are compared and contrasted. The first employs preformed gradients comprised of two simple buffers in density-stabilized free solution. The second method utilizes neutral membranes to isolate electrolyte reservoirs of constant composition from the separation column. It is shown by computer simulation that steady-state gradients can be formed at any pH range with any number of components in such a system.

  19. Commodity Cluster Computing for Remote Sensing Applications using Red Hat LINUX

    NASA Technical Reports Server (NTRS)

    Dorband, John

    2003-01-01

    Since 1994, we have been doing research at Goddard Space Flight Center on implementing a wide variety of applications on commodity-based computing clusters. This talk is about these clusters and how they are used in these applications, including ones for remote sensing.

  20. Mars Science Laboratory Workstation Test Set

    NASA Technical Reports Server (NTRS)

    Henriquez, David A.; Canham, Timothy K.; Chang, Johnny T.; Villaume, Nathaniel

    2009-01-01

    The Mars Science Laboratory-developed Workstation Test Set (WSTS) is a computer program that enables flight software development on virtual MSL avionics. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC.

  1. Installation and Testing Instructions for the Sandia Automatic Report Generator (ARG).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clay, Robert L.

    In this report, we provide detailed and reproducible installation instructions for the Automatic Report Generator (ARG), for both Linux and macOS target platforms.

  2. Low-energy, low-budget sensor web enablement of an amateur weather station

    NASA Astrophysics Data System (ADS)

    Schmidt, G.; Herrnkind, S.; Klump, J.

    2008-12-01

    Sensor Web Enablement (OGC SWE) has developed into a powerful concept with many potential applications in environmental monitoring and in other fields. This has spurred development of software applications for Sensor Observation Services (SOS), while the development of client applications still lags behind. Furthermore, the deployment of sensors in the field often places tight constraints on the energy and bandwidth available for data capture and transmission. As a "proof of concept" we equipped an amateur weather station with low-budget, standard components to read the data from its base station and feed it into a sensor observation service using its standard web-service interface. We chose the weather station as an example because of its simple measured phenomena and its low data volume. As the sensor observation service we chose the open source software package offered by the 52North consortium. Power consumption can be problematic when deploying a sensor platform in the field. Instead of a common PC we used a Network Storage Link Unit (NSLU2) with a Linux operating system, a configuration also known as a "Debian SLUG". The power consumption of a "SLUG" is on the order of 2 to 5 watts, compared to 40 W for a small PC. The "SLUG" provides one Ethernet port and two USB ports, one of which is used by its external USB hard drive. This modular setup is open to modifications, for example the addition of a GSM modem for data transmission over a cellular telephone network. The simple setup, low price, low power consumption, and low technological entry level allow many potential uses of a "SLUG" in environmental sensor networks in research, education and citizen science. The use of a mature sensor observation service software package allows easy integration of monitoring networks with other web services.
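
    To give an idea of how a client talks to such a Sensor Observation Service, the sketch below issues the standard OGC key-value-pair GetCapabilities request with Python's standard library; the endpoint URL is hypothetical and would point at the 52North SOS instance fed by the weather station.

        # Query an SOS endpoint for its capabilities document (hypothetical URL).
        import urllib.parse
        import urllib.request

        SOS_ENDPOINT = "http://weather.example.org/52n-sos/sos"   # hypothetical

        params = urllib.parse.urlencode({"service": "SOS", "request": "GetCapabilities"})
        with urllib.request.urlopen(SOS_ENDPOINT + "?" + params, timeout=30) as resp:
            capabilities_xml = resp.read().decode("utf-8")

        # The capabilities document lists the offerings (e.g. the station's
        # temperature and pressure time series) that GetObservation can query.
        print(capabilities_xml[:200])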

  3. Power from Ocean Waves.

    ERIC Educational Resources Information Center

    Newman, J. N.

    1979-01-01

    Discussed is the utilization of surface ocean waves as a potential source of power. Simple and large-scale wave power devices and conversion systems are described. Alternative utilizations, environmental impacts, and future prospects of this alternative energy source are detailed. (BT)

  4. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are designed for Linux operating systems (OS), the arrival of the WindowsHPC 2008 OS provides the opportunity to evaluate the use of a new platform on which to develop and port climate and earth science models. In particular, we are evaluating Microsoft's Visual Studio Integrated Developer Environment to determine its appropriateness for the climate modeling community. In the initial phases of this project, we have ported GEOS-5, WRF, GISS ModelE, and GFS to Linux on a CX1 and are in the process of porting WRF and ModelE to WindowsHPC 2008. Initial tests on the CX1 Linux OS indicate favorable comparisons in terms of performance and consistency of scientific results when compared with experiments executed on NASA high end systems. As in the past, NASA's large clusters will continue to be an important part of our objectives. We envision a seamless environment in which an investigator performs model development and testing on a desktop system and can seamlessly transfer execution to supercomputer clusters for production.

  5. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
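
    As a concrete illustration of the resampling idea described in this record (the original routine is written in SAS; the data, statistic, and replicate count below are purely illustrative), a minimal bootstrap standard-error estimate might look like this in Python:

        import numpy as np

        rng = np.random.default_rng(0)
        sample = rng.normal(loc=50, scale=10, size=30)     # illustrative data only

        def bootstrap_se(data, stat=np.mean, n_boot=2000):
            """Estimate the standard error of `stat` by resampling with replacement."""
            replicates = np.empty(n_boot)
            for b in range(n_boot):
                resample = rng.choice(data, size=len(data), replace=True)
                replicates[b] = stat(resample)
            return replicates.std(ddof=1)

        print("bootstrap SE of the mean:", bootstrap_se(sample))
        print("analytic SE of the mean: ", sample.std(ddof=1) / np.sqrt(len(sample)))

    For the sample mean the bootstrap estimate should agree closely with the analytic formula; the method's appeal is that the same resampling loop applies to statistics with no simple analytic standard error.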

  6. FEST-C 1.3 & 2.0 for CMAQ Bi-directional NH3, Crop Production, and SWAT Modeling

    EPA Science Inventory

    The Fertilizer Emission Scenario Tool for CMAQ (FEST-C) is developed in a Linux environment as a festc Java interface that integrates 14 tools and scenario-management options, facilitating land use/crop data processing for the Community Multiscale Air Quality (CMAQ) modeling system ...

  7. A Configuration Framework and Implementation for the Least Privilege Separation Kernel

    DTIC Science & Technology

    2010-12-01

    The Altova Web site states that virtualization software, Parallels for Mac and Wine, is required for running it on MacOS and RedHat Linux...

  8. How to Build MCNP 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Jeffrey S.

    This presentation describes how to build MCNP 6.2. MCNP®* 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.

  9. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  10. Laboratory Diagnosis of Lassa Fever

    DTIC Science & Technology

    2017-01-30

  11. SpiceyPy, a Python Wrapper for SPICE

    NASA Astrophysics Data System (ADS)

    Annex, A.

    2017-06-01

    SpiceyPy is an open source Python wrapper for the NAIF SPICE toolkit. It is available for macOS, Linux, and Windows platforms and for Python versions 2.7.x and 3.x as well as Anaconda. SpiceyPy can be installed by running: “pip install spiceypy.”
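
    A minimal usage sketch after running "pip install spiceypy": the toolkit-version query below is a common first check that the wrapper loads, while the commented-out kernel name is only a placeholder for a leapseconds file the user would obtain separately from NAIF.

        import spiceypy as spice

        # Report the version of the underlying NAIF CSPICE toolkit; a quick check that
        # the wrapper and its compiled library loaded correctly.
        print(spice.tkvrsn("TOOLKIT"))

        # A typical next step is loading a kernel and converting a UTC epoch to
        # ephemeris time; "naif0012.tls" is a placeholder for a leapseconds kernel.
        # spice.furnsh("naif0012.tls")
        # et = spice.str2et("2017-06-01T00:00:00")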

  12. Scaling Concolic Execution of Binary Programs for Security Applications

    DTIC Science & Technology

    2013-08-01

    Fragment of a results table; the test cases include a Sendmail byte-array input and a Linux struct passwd entry.

  13. ExScal Backbone Network Architecture

    DTIC Science & Technology

    2005-01-01

    802.11 battery-powered nodes was laid over the sensor network. We adopted the Stargate platform for the backbone tier to serve as the basis for...its head. XSS Hardware and Network: XSS stands for eXtreme Scaling Stargate. A Stargate is a Linux-based single-board computer. It has a 400 MHz

  14. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited numbers of open-source software in China, "Red Flag Linux" stands out most strikingly, commanding 30 percent share of Chinese software market. Unlike the spontaneity of open-source movement in North America, open-source software development in…

  15. Assessing the Effect of Honeypots on Cyber-Attackers

    DTIC Science & Technology

    2006-12-01

    provide a process for modeling threats and decision cycles. The third to fifth sections provide the history and overview of honeypots and anti-honeypot... Fragment of a configuration table: Windows XP Professional with SP2, data capture on SUSE Linux 10, Snort data stored in PostgreSQL 8.1.1, self-contained honeynet (SUSE Linux).

  16. BrainPort(Registered trademark) Technology Tongue Interface Characterization

    DTIC Science & Technology

    2010-03-01

    Fragment of a system diagram and acronym list: an Experiment Control Workstation connects over TCP/IP 10/100 to a Linux-based controller driving a high-density array (HDA) of 2,000 to 20,000 electrodes; HDA = High Density Array, IOD = Intra-Oral Device, IRB = Institutional Review Board.

  17. Vectorized data acquisition and fast triple-correlation integrals for Fluorescence Triple Correlation Spectroscopy

    NASA Astrophysics Data System (ADS)

    Ridgeway, William K.; Millar, David P.; Williamson, James R.

    2013-04-01

    Fluorescence Correlation Spectroscopy (FCS) is widely used to quantify reaction rates and concentrations of molecules in vitro and in vivo. We recently reported Fluorescence Triple Correlation Spectroscopy (F3CS), which correlates three signals together instead of two. F3CS can analyze the stoichiometries of complex mixtures and detect irreversible processes by identifying time-reversal asymmetries. Here we report the computational developments that were required for the realization of F3CS and present the results as the Triple Correlation Toolbox suite of programs. Triple Correlation Toolbox is a complete data analysis pipeline capable of acquiring, correlating and fitting large data sets. Each segment of the pipeline handles error estimates for accurate error-weighted global fitting. Data acquisition was accelerated with a combination of off-the-shelf counter-timer chips and vectorized operations on 128-bit registers. This allows desktop computers with inexpensive data acquisition cards to acquire hours of multiple-channel data with sub-microsecond time resolution. Off-line correlation integrals were implemented as a two delay time multiple-tau scheme that scales efficiently with multiple processors and provides an unprecedented view of linked dynamics. Global fitting routines are provided to fit FCS and F3CS data to models containing up to ten species. Triple Correlation Toolbox is a complete package that enables F3CS to be performed on existing microscopes.
    Catalogue identifier: AEOP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 50189
    No. of bytes in distributed program, including test data, etc.: 6135283
    Distribution format: tar.gz
    Programming language: C/Assembly.
    Computer: Any with GCC and library support.
    Operating system: Linux and OS X (data acquisition for Linux only due to library availability); not tested on Windows.
    RAM: ≥512 MB.
    Classification: 16.4.
    External routines: NIDAQmx (National Instruments), Gnu Scientific Library, GTK+, PLplot (optional).
    Nature of problem: Fluorescence Triple Correlation Spectroscopy required three things: data acquisition at faster speeds than were possible without expensive custom hardware, triple-correlation routines that could process 1/2 TB data sets rapidly, and fitting routines capable of handling several to a hundred fit parameters and 14,000+ data points, each with error estimates.
    Solution method: A novel data acquisition concept mixed signal processing with off-the-shelf hardware and data-parallel processing using 128-bit registers found in desktop CPUs. Correlation algorithms used fractal data structures and multithreading to reduce data analysis times. Global fitting was implemented with robust minimization routines and provides feedback that allows the user to critically inspect initial guesses and fits.
    Restrictions: Data acquisition only requires a National Instruments data acquisition card (it was tested on Linux using card PCIe-6251) and a simple home-built circuit.
    Unusual features: Hand-coded x86-64 assembly for data acquisition loops (platform-independent C code also provided).
    Additional comments: A complete collection of tools to perform Fluorescence Triple Correlation Spectroscopy, from data acquisition to two-tau correlation of large data sets, to model fitting.
    Running time: 1-5 h of data analysis per hour of data collected. Varies depending on data-acquisition length, time resolution, data density and number of cores used for correlation integrals.

  18. Using UAV's to Measure the Urban Boundary Layer

    NASA Astrophysics Data System (ADS)

    Jacob, R. L.; Sankaran, R.; Beckman, P. H.

    2015-12-01

    The urban boundary layer is one of the most poorly studied regions of the atmospheric boundary layer. Since a majority of the world's population now lives in urban areas, it is becoming a more important region to measure and model. The combination of relatively low-cost unmanned aerial vehicles and low-cost sensors can together provide a new instrument for measuring urban and other boundary layers. We have mounted a new sensor and compute platform called Waggle on an off-the-shelf XR8 octo-copter from 3DRobotics. Waggle consists of multiple sensors for measuring pressure, temperature and humidity as well as trace gases such as carbon monoxide, nitrogen dioxide, sulfur dioxide and ozone. A single board computer running Linux included in Waggle on the UAV allows in-situ processing and data storage. Communication of the data is through WiFi or 3G and the Waggle software can save the data in case communication is lost during flight. The flight pattern is a deliberately simple vertical ascent and descent over a fixed location to provide vertical profiles and so flights can be confined to urban parks, industrial areas or the footprint of a single rooftop. We will present results from test flights in urban and rural areas in and around Chicago.

  19. Rcount: simple and flexible RNA-Seq read counting.

    PubMed

    Schmid, Marc W; Grossniklaus, Ueli

    2015-02-01

    Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and the Qt libraries (qt-project.org). Source code and 64-bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount. Contact: marcschmid@gmx.ch. Test data, genome annotation files, useful Python and R scripts and a step-by-step user guide (including run-time and memory usage tests) are available on github.com/MWSchmid/Rcount. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
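
    Rcount itself is a C++ GUI application; the short Python sketch below is only a schematic of the counting problem the abstract describes (resolving a read that overlaps several features by feature-type priority), using invented gene annotations rather than Rcount's actual algorithm or file formats.

        from collections import Counter

        # Hypothetical feature-type priorities: the higher value wins for overlapping features.
        priority = {"protein_coding": 2, "rRNA": 1}

        # For each read, the (gene, feature_type) pairs it aligns to; read2 is ambiguous.
        alignments = {
            "read1": [("GeneA", "protein_coding")],
            "read2": [("GeneA", "protein_coding"), ("rRNA1", "rRNA")],
            "read3": [("rRNA1", "rRNA")],
        }

        counts = Counter()
        for read, hits in alignments.items():
            best = max(priority[ftype] for _, ftype in hits)
            winners = [gene for gene, ftype in hits if priority[ftype] == best]
            if len(winners) == 1:        # uniquely resolved by priority
                counts[winners[0]] += 1
            # otherwise the read remains ambiguous and is left uncounted here

        print(counts)                    # Counter({'GeneA': 2, 'rRNA1': 1})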

  20. Poster — Thur Eve — 52: A Web-based Platform for Collaborative Document Management in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kildea, J.; Joseph, A.

    We describe DepDocs, a web-based platform that we have developed to manage the committee meetings, policies, procedures and other documents within our otherwise paperless radiotherapy clinic. DepDocs is essentially a document management system based on the popular Drupal content management software. For security and confidentiality, it is hosted on a Linux server internal to our hospital network such that documents are never sent to the cloud or outside of the hospital firewall. We used Drupal's in-built role-based user rights management system to assign a role, and associated document editing rights, to each user. Documents are accessed for viewing using either a simple Google-like search or by generating a list of related documents from a taxonomy of categorization terms. Our system provides document revision tracking and a document review and approval mechanism for all official policies and procedures. Committee meeting schedules, agendas and minutes are maintained by committee chairs and are restricted to committee members. DepDocs has been operational within our department for over six months and already has 45 unique users and an archive of over 1000 documents, mostly policies and procedures. Documents are easily retrievable from the system using any web browser within our hospital's network.

  1. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  2. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-06-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.

  3. MoleCoolQt – a molecule viewer for charge-density research

    PubMed Central

    Hübschle, Christian B.; Dittrich, Birger

    2011-01-01

    MoleCoolQt is a molecule viewer for charge-density research. Features include the visualization of local atomic coordinate systems in multipole refinements based on the Hansen and Coppens formalism as implemented, for example, in the XD suite. Residual peaks and holes from XDfft are translated so that they appear close to the nearest atom of the asymmetric unit. Critical points from a topological analysis of the charge density can also be visualized. As in the program MolIso, color-mapped isosurfaces can be generated with a simple interface. Apart from its visualization features the program interactively helps in assigning local atomic coordinate systems and local symmetry, which can be automatically detected and altered. Dummy atoms – as sometimes required for local atomic coordinate systems – are calculated on demand; XD system files are updated after changes. When using the invariom database, potential scattering factor assignment problems can be resolved by the use of an interactive dialog. The following file formats are supported: XD, MoPro, SHELX, GAUSSIAN (com, FChk, cube), CIF and PDB. MoleCoolQt is written in C++ using the Qt4 library, has a user-friendly graphical user interface, and is available for several flavors of Linux, Windows and MacOS. PMID:22477783

  4. AIDE - Advanced Intrusion Detection Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cathy L.

    2013-04-28

    Would you like to know when someone has dropped an undesirable executable binary on your system? What about something less malicious such as a software installation by a user? What about the user who decides to install a newer version of mod_perl or PHP on your web server without letting you know beforehand? Or even something as simple as when an undocumented config file change is made by another member of the admin group? Do you even want to know about all the changes that happen on a daily basis on your server? The purpose of an intrusion detection system (IDS) is to detect unauthorized, possibly malicious activity. The purpose of a host-based IDS, or file integrity checker, is to check for unauthorized changes to key system files, binaries, libraries, and directories on the system. AIDE is an Open Source file and directory integrity checker. AIDE will let you know when a file or directory has been added, deleted, or modified. It is included with Red Hat Enterprise Linux 6. It is available for other Linux distros. This is a case study describing the process of configuring AIDE on an out-of-the-box RHEL6 installation. Its goal is to illustrate the thinking and the process by which a useful AIDE configuration is built.
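
    AIDE's real configuration lives in its own configuration file and its databases are binary, so the following Python fragment is only a conceptual sketch of what a file-integrity baseline involves (hash files, store the digests, re-hash later, and report additions, deletions, and changes); the monitored directory and file names are hypothetical.

        import hashlib
        import json
        import os

        def snapshot(root):
            """Map each readable file under `root` to its SHA-256 digest."""
            digests = {}
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as fh:
                            digests[path] = hashlib.sha256(fh.read()).hexdigest()
                    except OSError:
                        continue          # skip unreadable files in this sketch
            return digests

        # Initialize the baseline "database" (hypothetical monitored directory and file name).
        with open("baseline.json", "w") as out:
            json.dump(snapshot("/etc"), out)

        # ... later: re-scan and report what was added, removed, or modified ...
        with open("baseline.json") as fh:
            stored = json.load(fh)
        current = snapshot("/etc")
        added = current.keys() - stored.keys()
        removed = stored.keys() - current.keys()
        changed = {p for p in current.keys() & stored.keys() if current[p] != stored[p]}
        print("added:", added, "removed:", removed, "changed:", changed)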

  5. Collective Intelligence: Aggregation of Information from Neighbors in a Guessing Game.

    PubMed

    Pérez, Toni; Zamora, Jordi; Eguíluz, Víctor M

    2016-01-01

    Complex systems show the capacity to aggregate information and to display coordinated activity. In the case of social systems the interaction of different individuals leads to the emergence of norms, trends in political positions, opinions, cultural traits, and even scientific progress. Examples of collective behavior can be observed in activities like the Wikipedia and Linux, where individuals aggregate their knowledge for the benefit of the community, and citizen science, where the potential of collectives to solve complex problems is exploited. Here, we conducted an online experiment to investigate the performance of a collective when solving a guessing problem in which each actor is endowed with partial information and placed as the nodes of an interaction network. We measure the performance of the collective in terms of the temporal evolution of the accuracy, finding no statistical difference in the performance for two classes of networks, regular lattices and random networks. We also determine that a Bayesian description captures the behavior pattern the individuals follow in aggregating information from neighbors to make decisions. In comparison with other simple decision models, the strategy followed by the players reveals a suboptimal performance of the collective. Our contribution provides the basis for the micro-macro connection between individual based descriptions and collective phenomena.
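
    The toy calculation below is only a generic illustration of the kind of Bayesian aggregation the paper refers to (a binary hidden state and independent, equally reliable signals), not the authors' fitted model; the signal reliability and the observation sequence are invented.

        import numpy as np

        p_correct = 0.7                    # assumed reliability of any single signal or guess
        belief = np.array([0.5, 0.5])      # prior over the two possible answers {0, 1}

        def update(belief, observation):
            """One Bayesian update from a noisy binary observation (own clue or a neighbor's guess)."""
            likelihood = np.array([
                p_correct if observation == 0 else 1 - p_correct,   # P(obs | answer is 0)
                p_correct if observation == 1 else 1 - p_correct,   # P(obs | answer is 1)
            ])
            posterior = likelihood * belief
            return posterior / posterior.sum()

        for obs in [1, 1, 0, 1]:           # own signal followed by three neighbors' guesses
            belief = update(belief, obs)

        print("posterior:", belief, "-> decide", int(belief.argmax()))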

  6. Collective Intelligence: Aggregation of Information from Neighbors in a Guessing Game

    PubMed Central

    Pérez, Toni; Zamora, Jordi; Eguíluz, Víctor M.

    2016-01-01

    Complex systems show the capacity to aggregate information and to display coordinated activity. In the case of social systems the interaction of different individuals leads to the emergence of norms, trends in political positions, opinions, cultural traits, and even scientific progress. Examples of collective behavior can be observed in activities like the Wikipedia and Linux, where individuals aggregate their knowledge for the benefit of the community, and citizen science, where the potential of collectives to solve complex problems is exploited. Here, we conducted an online experiment to investigate the performance of a collective when solving a guessing problem in which each actor is endowed with partial information and placed as the nodes of an interaction network. We measure the performance of the collective in terms of the temporal evolution of the accuracy, finding no statistical difference in the performance for two classes of networks, regular lattices and random networks. We also determine that a Bayesian description captures the behavior pattern the individuals follow in aggregating information from neighbors to make decisions. In comparison with other simple decision models, the strategy followed by the players reveals a suboptimal performance of the collective. Our contribution provides the basis for the micro-macro connection between individual based descriptions and collective phenomena. PMID:27093274

  7. Biographer: web-based editing and rendering of SBGN compliant biochemical networks.

    PubMed

    Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas

    2013-06-01

    The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL

  8. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  9. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  10. Grid Computing Environment using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Alanis, Fransisco; Mahmood, Akhtar

    2003-10-01

    Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphics user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
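
    The codes on the Sphinx cluster were compiled in C against LAM-MPI; purely as an illustration of the same message-passing model, a minimal Python sketch using mpi4py (assuming an MPI implementation and the mpi4py package are installed) could be launched with, for example, "mpirun -np 4 python hello_mpi.py":

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()             # id of this process within the communicator
        size = comm.Get_size()             # total number of parallel processes

        # Each rank computes a partial sum; rank 0 gathers and reduces the results.
        local = sum(range(rank * 1000, (rank + 1) * 1000))
        total = comm.reduce(local, op=MPI.SUM, root=0)

        if rank == 0:
            print(f"{size} ranks computed total = {total}")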

  11. TELICS—A Telescope Instrument Control System for Small/Medium Sized Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Srivastava, Mudit K.; Ramaprakash, A. N.; Burse, Mahesh P.; Chordia, Pravin A.; Chillal, Kalpesh S.; Mestry, Vilas B.; Das, Hillol K.; Kohok, Abhay A.

    2009-10-01

    For any modern astronomical observatory, it is essential to have an efficient interface between the telescope and its back-end instruments. However, for small and medium-sized observatories, this requirement is often limited by tight financial constraints. Therefore a simple yet versatile and low-cost control system is required for such observatories to minimize cost and effort. Here we report the development of a modern, multipurpose instrument control system TELICS (Telescope Instrument Control System) to integrate the controls of various instruments and devices mounted on the telescope. TELICS consists of an embedded hardware unit known as a common control unit (CCU) in combination with Linux-based data acquisition and user interface. The hardware of the CCU is built around the ATmega 128 microcontroller (Atmel Corp.) and is designed with a backplane, master-slave architecture. A Qt-based graphical user interface (GUI) has been developed and the back-end application software is based on C/C++. TELICS provides feedback mechanisms that give the operator good visibility and a quick-look display of the status and modes of instruments as well as data. TELICS has been used for regular science observations since 2008 March on the 2 m, f/10 IUCAA Telescope located at Girawali in Pune, India.

  12. Directed flux motor

    NASA Technical Reports Server (NTRS)

    Wilson, Andrew (Inventor); Punnoose, Andrew (Inventor); Strausser, Katherine (Inventor); Parikh, Neil (Inventor)

    2011-01-01

    A directed flux motor described utilizes the directed magnetic flux of at least one magnet through ferrous material to drive different planetary gear sets to achieve capabilities in six actuated shafts that are grouped three to a side of the motor. The flux motor also utilizes an interwoven magnet configuration which reduces the overall size of the motor. The motor allows for simple changes to modify the torque to speed ratio of the gearing contained within the motor as well as simple configurations for any number of output shafts up to six. The changes allow for improved manufacturability and reliability within the design.

  13. Simple Counts of ADL Dependencies Do Not Adequately Reflect Older Adults' Preferences toward States of Functional Impairment

    PubMed Central

    Sims, Tamara; Holmes, Tyson H.; Bravata, Dena M.; Garber, Alan M.; Nelson, Lorene M.; Goldstein, Mary K.

    2008-01-01

    Objective To use unweighted counts of dependencies in Activities of Daily Living (ADLs) to assess the impact of functional impairment requires an assumption of equal preferences for each ADL dependency. To test this assumption, we analyzed standard gamble utilities of single and combination ADL dependencies among older adults. Study Design and Setting: Four hundred older adults used multimedia software (FLAIR1) to report standard gamble utilities for their current health and hypothetical health states of dependency in each of 7 ADLs and 8 of 30 combinations of ADL dependencies. Results Utilities for health states of multiple ADL dependencies were often greater than for states of single ADL dependencies. Dependence in eating, the ADL dependency with the lowest utility rating of the single ADL dependencies, ranked lower than 7 combination states. Similarly, some combination states with fewer ADL dependencies had lower utilities than those with more ADL dependencies. These findings were consistent across groups by gender, age, and education. Conclusion Our results suggest that the count of ADL dependencies does not adequately represent the utility for a health state. Cost-effectiveness analyses and other evaluations of programs that prevent or treat functional dependency should apply utility-weights rather than relying on simple ADL counts. PMID:18722749

  14. Simple & Rapid Generation of Complex DNA Profiles for the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Kass, David H.

    2007-01-01

    Deoxyribonucleic acid (DNA) profiles can be generated by a variety of techniques incorporating different types of DNA markers. Simple methods are commonly utilized in the undergraduate laboratory, but with certain drawbacks. In this article, the author presents an advancement of the "Alu" dimorphism technique involving two tetraplex polymerase…

  15. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    ERIC Educational Resources Information Center

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  16. Integrated Approach to User Account Management

    NASA Technical Reports Server (NTRS)

    Kesselman, Glenn; Smith, William

    2007-01-01

    IT environments consist of both Windows and other platforms. Providing user account management for this model has become increasingly difficult. If Microsoft's Active Directory could be enhanced to extend a Windows identity for authentication services for Unix, Linux, Java and Macintosh systems, then an integrated approach to user account management could be realized.

  17. Neutron Scattering Template

    Science.gov Websites

    Scattering Banner Acknowledgements The graphics used on the Neutron Scattering Web Pages were designed by reused on these web pages by kind permission of Jack Carpenter, and with the assistance of Mary Koelbl (IPD). Rick Goyette (IPNS) set up and maintains the Linux web server as well as helping to automate the

  18. How to Use FASTLANEs to Protect IP Networks

    DTIC Science & Technology

    2006-08-18

    parameters exists in its QoS database, then determine which VRF table is assigned to the QoS request. Once an association between the requested QoS and a...

  19. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

    packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many

  20. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  1. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes like fetal versus adult SCs. WCTDS is developed as a top layer application of our recent publication regarding cell type discriminations, which employs DNA-methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive the user input and to run mathematical models for analyzing data and then to present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes and it can also be extended to detect other cell types like cancer cells.

  2. A Web-Server of Cell Type Discrimination System

    PubMed Central

    Zhong, Yan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes like fetal versus adult SCs. WCTDS is developed as a top layer application of our recent publication regarding cell type discriminations, which employs DNA-methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive the user input and to run mathematical models for analyzing data and then to present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes and it can also be extended to detect other cell types like cancer cells. PMID:24578634

  3. Mining Bug Databases for Unidentified Software Vulnerabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Jason Wright

    2012-06-01

    Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity to mine common publicly available bug databases for vulnerabilities that are yet to be identified. We present bug database analysis of two well-known and frequently used software packages, namely the Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of vulnerabilities that were discovered for the time period from January 2006 to April 2011 were hidden impact vulnerabilities. It is also shown that the percentage of hidden impact vulnerabilities has increased in the last two years, for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.
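
    The paper proposes a classifier built by text mining bug databases; the fragment below only shows one generic way such a classifier could be assembled with scikit-learn (TF-IDF features feeding a linear model), with made-up bug-report strings and labels standing in for the authors' data and method.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Made-up bug-report text, labelled 1 if the bug later proved security-relevant.
        reports = [
            "kernel null pointer dereference in packet handler",
            "typo in man page for configuration option",
            "buffer overflow when parsing overly long hostname",
            "build fails on sparc due to missing makefile rule",
        ]
        labels = [1, 0, 1, 0]

        # TF-IDF features over unigrams and bigrams, fed to a linear classifier.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        model.fit(reports, labels)

        print(model.predict(["heap corruption triggered by malformed query string"]))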

  4. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    NASA Astrophysics Data System (ADS)

    Oh, S. H.; Kang, Y. W.; Byun, Y. I.

    2007-12-01

    We present software which we developed for the multi-purpose CCD camera. This software can be used on all 3 types of CCD - KAF-0401E (768×512), KAF-1602E (1536×1024), KAF-3200E (2184×1472) made by KODAK Co. For efficient CCD camera control, the software is operated as two independent processes: the CCD control program and the temperature/shutter operation program. This software is designed for fully automatic as well as manual operation under the Linux system, and is controlled by the Linux user signal procedure. We plan to use this software for an all sky survey system and also for night sky monitoring and sky observation. The read-out times of each CCD are about 15 sec, 64 sec, and 134 sec for KAF-0401E, KAF-1602E, and KAF-3200E respectively, because these times are limited by the data transmission speed of the parallel port. For a larger format CCD, higher-speed data transmission is required; we are considering converting this control software to one using the USB port for high-speed data transmission.

  5. Manipulation of volumetric patient data in a distributed virtual reality environment.

    PubMed

    Dech, F; Ai, Z; Silverstein, J C

    2001-01-01

    Due to increases in network speed and bandwidth, distributed exploration of medical data in immersive Virtual Reality (VR) environments is becoming increasingly feasible. The volumetric display of radiological data in such environments presents a unique set of challenges. The sheer size and complexity of the datasets involved not only make them difficult to transmit to remote sites, but these datasets also require extensive user interaction in order to make them understandable to the investigator and manageable to the rendering hardware. A sophisticated VR user interface is required in order for the clinician to focus on the aspects of the data that will provide educational and/or diagnostic insight. We will describe a software system of data acquisition, data display, Tele-Immersion, and data manipulation that supports interactive, collaborative investigation of large radiological datasets. The hardware required in this strategy is still at the high end of the graphics workstation market. Future software ports to Linux and NT, along with the rapid development of PC graphics cards, open the possibility for later work with Linux or NT PCs and PC clusters.

  6. Integrated Design and Implementation of Embedded Control Systems with Scilab

    PubMed Central

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-01-01

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. PMID:27873827

  7. Integrated Design and Implementation of Embedded Control Systems with Scilab.

    PubMed

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-09-05

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost.

  8. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. However, the quality of read service offered by a conventional server is unsatisfactory because of the many functions running on the server and the excessive number of callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The Operating System (OS), composed of an optimized kernel and a miniaturized file system, is stored in flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages, respectively. In order to enhance the quality of access, the management of the buffer cache in the file system is optimized. Some benchmark programs are performed to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually for read access, our attention is focused on the read throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  9. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  10. Operational resilience: concepts, design and analysis

    NASA Astrophysics Data System (ADS)

    Ganin, Alexander A.; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M.; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-01

    Building resilience into today’s complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks.
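
    The resilience quantities mentioned here reduce to simple operations on the critical-functionality curve; the snippet below shows that generic calculation (integrate the curve over the control interval and normalize, take the minimum as robustness) for an invented disruption-and-recovery profile, not for any system analyzed in the paper.

        import numpy as np

        # Invented critical-functionality profile K(t): full function, a drop at the
        # disruption (t = 20), then linear recovery completed by t = 60.
        t = np.linspace(0.0, 100.0, 1001)
        K = np.ones_like(t)
        recovering = (t >= 20) & (t < 60)
        K[recovering] = 0.4 + 0.6 * (t[recovering] - 20.0) / 40.0

        # Resilience as the normalized area under K(t) over the control interval;
        # robustness as the lowest functionality reached.
        resilience = np.trapz(K, t) / (t[-1] - t[0])
        robustness = K.min()
        print(f"resilience = {resilience:.3f}, robustness = {robustness:.2f}")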

  11. KAnalyze: a fast versatile pipelined K-mer toolkit

    PubMed Central

    Audano, Peter; Vannberg, Fredrik

    2014-01-01

    Motivation: Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. Results: As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. Availability and implementation: KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/ Contact: fredrik.vannberg@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24642064

  12. KAnalyze: a fast versatile pipelined k-mer toolkit.

    PubMed

    Audano, Peter; Vannberg, Fredrik

    2014-07-15

    Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/. © The Author 2014. Published by Oxford University Press.
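
    KAnalyze itself is a multithreaded Java 7 toolkit; the few lines of Python below only illustrate the core operation it performs, decomposing a sequence into overlapping k-mers and counting them, using a toy sequence.

        from collections import Counter

        def count_kmers(sequence, k):
            """Count the overlapping k-mers of length k in a nucleotide sequence."""
            sequence = sequence.upper()
            return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

        counts = count_kmers("ACGTACGTGACG", k=4)
        for kmer, n in sorted(counts.items(), key=lambda item: -item[1]):
            print(kmer, n)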

  13. Computational Support for Technology- Investment Decisions

    NASA Technical Reports Server (NTRS)

    Adumitroaie, Virgil; Hua, Hook; Lincoln, William; Block, Gary; Mrozinski, Joseph; Shelton, Kacie; Weisbin, Charles; Elfes, Alberto; Smith, Jeffrey

    2007-01-01

    Strategic Assessment of Risk and Technology (START) is a user-friendly computer program that assists human managers in making decisions regarding research-and-development investment portfolios in the presence of uncertainties and of non-technological constraints that include budgetary and time limits, restrictions related to infrastructure, and programmatic and institutional priorities. START facilitates quantitative analysis of technologies, capabilities, missions, scenarios and programs, and thereby enables the selection and scheduling of value-optimal development efforts. START incorporates features that, variously, perform or support a unique combination of functions, most of which are not systematically performed or supported by prior decision- support software. These functions include the following: Optimal portfolio selection using an expected-utility-based assessment of capabilities and technologies; Temporal investment recommendations; Distinctions between enhancing and enabling capabilities; Analysis of partial funding for enhancing capabilities; and Sensitivity and uncertainty analysis. START can run on almost any computing hardware, within Linux and related operating systems that include Mac OS X versions 10.3 and later, and can run in Windows under the Cygwin environment. START can be distributed in binary code form. START calls, as external libraries, several open-source software packages. Output is in Excel (.xls) file format.

  14. Operational resilience: concepts, design and analysis

    PubMed Central

    Ganin, Alexander A.; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M.; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-01

    Building resilience into today’s complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks. PMID:26782180

  15. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on limma and affy package from Bioconductor, the spacings LOESS histogram (SPLOSH) method, PCA-assisted normalization method and genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weight, background correction, graphical plotting, normalization, linear modeling, empirical bayes statistical analysis, false discovery rate (FDR) estimation, chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165

  16. Thirty Meter Telescope narrow-field infrared adaptive optics system real-time controller prototyping results

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Chapin, Edward L.; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-07-01

    Prototyping and benchmarking were performed for the Real-Time Controller (RTC) of the Narrow Field InfraRed Adaptive Optics System (NFIRAOS). To perform wavefront correction, NFIRAOS utilizes two deformable mirrors (DM) and one tip/tilt stage (TTS). The RTC receives wavefront information from six Laser Guide Star (LGS) Shack-Hartmann WaveFront Sensors (WFS), one high-order Natural Guide Star Pyramid WaveFront Sensor (PWFS) and multiple low-order instrument detectors. The RTC uses this information to determine the commands to send to the wavefront correctors. NFIRAOS is the first-light AO system for the Thirty Meter Telescope (TMT). The prototyping was performed using dual-socket high-performance Linux servers with the real-time (PREEMPT_RT) patch and demonstrated the viability of a commercial off-the-shelf (COTS) hardware approach to large-scale AO reconstruction. In particular, a large custom matrix vector multiplication (MVM) was benchmarked and met the latency requirements. In addition, all major inter-machine communication was verified to be adequate using 10Gb and 40Gb Ethernet. The results of this prototyping have enabled a CPU-based NFIRAOS RTC design to proceed with confidence that COTS hardware can be used to meet the demanding performance requirements.
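
    To make the benchmarking step concrete, a minimal sketch of an MVM latency measurement is shown below. The matrix dimensions are invented for illustration and the production RTC uses optimized, multi-threaded native code rather than NumPy, but the pattern is the same: time the matrix-vector product that dominates each AO cycle and track the worst case, not just the mean.

```python
import time
import numpy as np

# Illustrative dimensions only -- not the actual NFIRAOS reconstructor size.
n_slopes, n_actuators = 10_000, 5_000
reconstructor = np.random.rand(n_actuators, n_slopes).astype(np.float32)
slopes = np.random.rand(n_slopes).astype(np.float32)

latencies = []
for _ in range(100):
    t0 = time.perf_counter()
    commands = reconstructor @ slopes   # the MVM performed every frame
    latencies.append(time.perf_counter() - t0)

# Hard real-time systems care about the tail of this distribution.
print(f"median {np.median(latencies) * 1e3:.2f} ms, "
      f"worst {max(latencies) * 1e3:.2f} ms")
```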

  17. XOP: a multiplatform graphical user interface for synchrotron radiation spectral and optics calculations

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Dejus, Roger J.

    1997-11-01

    XOP (X-ray OPtics utilities) is a graphical user interface (GUI) created to execute several computer programs that calculate the basic information needed by a synchrotron beamline scientist (designer or experimentalist). Typical examples of such calculations are: insertion device (undulator or wiggler) spectral and angular distributions, mirror and multilayer reflectivities, and crystal diffraction profiles. All programs are provided to the user under a unified GUI, which greatly simplifies their execution. The XOP optics applications (especially mirror calculations) take their basic input (optical constants, compound and mixture tables) from a flexible file-oriented database, which allows the user to select data from a large number of choices and also to customize their own data sets. XOP includes many mathematical and visualization capabilities. It also permits combining the reflectivities of several mirrors and filters and applying their combined effect to a source spectrum. This feature is very useful when calculating the thermal load on a series of optical elements. The XOP interface is written in IDL (Interactive Data Language). An embedded version of XOP, which runs freely under most Unix platforms (HP, Sun, DEC, Linux, etc.) and under Windows 95 and NT, is available upon request.

  18. CrisprGE: a central hub of CRISPR/Cas-based genome editing.

    PubMed

    Kaur, Karambir; Tandon, Himani; Gupta, Amit Kumar; Kumar, Manoj

    2015-01-01

    The CRISPR system is a powerful defense mechanism in bacteria and archaea that provides immunity against viruses. Recently, this process has found a new application in the intentional targeting of genomes. CRISPR-mediated genome editing is performed by two main components, namely a single guide RNA and the Cas9 protein. Despite the enormous data generated in this area, there is a dearth of high-throughput resources. Therefore, we have developed CrisprGE, a central hub of CRISPR/Cas-based genome editing. Presently, this database holds a total of 4680 entries of 223 unique genes from 32 model and other organisms. It encompasses information about the organism, gene, target gene sequences, genetic modification, modification length, genome editing efficiency, cell line, assay, etc. This depository is developed using the open source LAMP (Linux, Apache, MySQL, PHP) server stack. User-friendly browsing and searching facilities are integrated for easy data retrieval. It also includes useful tools like BLAST CrisprGE, BLAST NTdb and CRISPR Mapper. Considering the potential utilities of CRISPR in the vast area of biology and therapeutics, we foresee this platform as an aid to accelerate research in the burgeoning field of genome engineering. © The Author(s) 2015. Published by Oxford University Press.

  19. A methodology for achieving high-speed rates for artificial conductance injection in electrically excitable biological cells.

    PubMed

    Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C

    2001-12-01

    We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI, pronounced "merci"), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in the CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch-clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents and fast excitatory/inhibitory synapses.
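
    The dynamic-clamp cycle itself is conceptually simple: each tick, read the membrane potential, evaluate the model conductance, and inject the corresponding current. The sketch below is a schematic Python version with hypothetical DAQ calls (`read_voltage`, `write_current`) and a toy gating function; an actual implementation such as MRCI runs this loop inside a real-time kernel task with hardware I/O, not in ordinary user-space Python.

```python
import math
import time

DT = 1.0 / 50_000          # 50 kHz target cycle period, in seconds
G_MAX = 10e-9              # maximal artificial conductance (10 nS)
E_REV = -80e-3             # reversal potential of the artificial current (V)

def read_voltage() -> float:
    """Placeholder for the analog-input call to the DAQ board."""
    return -65e-3

def write_current(i_amps: float) -> None:
    """Placeholder for the analog-output call to the DAQ board."""
    pass

def gate(v: float) -> float:
    """Toy voltage-dependent activation in [0, 1]; a real model would
    integrate Hodgkin-Huxley-style gating ODEs here."""
    return 1.0 / (1.0 + math.exp(-(v + 50e-3) / 5e-3))

next_tick = time.perf_counter()
for _ in range(50_000):                     # one second of emulated clamp
    v = read_voltage()
    i = G_MAX * gate(v) * (v - E_REV)       # I = g(V) * (V - E_rev)
    write_current(-i)                       # sign depends on amplifier setup
    next_tick += DT
    while time.perf_counter() < next_tick:  # crude software pacing; a real
        pass                                # system relies on RT scheduling
```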

  20. Operational resilience: concepts, design and analysis.

    PubMed

    Ganin, Alexander A; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-19

    Building resilience into today's complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks.

  1. Creative Energy Management Can Save Money.

    ERIC Educational Resources Information Center

    Rose, Patricia

    1984-01-01

    Schools can launch energy conservation programs with simple money-saving measures like improving boiler maintenance, recalibrating utility meters, and obtaining preferred utility rates. Becoming more assertive in the marketplace and using "creative financing" when needed, they can then reinvest their savings in more extensive projects. (MCG)

  2. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).

  3. Towards a new Mercator Observatory Control System

    NASA Astrophysics Data System (ADS)

    Pessemier, W.; Raskin, G.; Prins, S.; Saey, P.; Merges, F.; Padilla, J. P.; Van Winckel, H.; Waelkens, C.

    2010-07-01

    A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). Formerly based on transputers, the new Mercator Observatory Control System (MOCS) consists of a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter is chosen as the primary platform for the lower level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the system design, implementation details, and the current status of the project.

  4. A Fault-Oblivious Extreme-Scale Execution Environment (FOX)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Hensbergen, Eric; Speight, William; Xenidis, Jimi

    IBM Research's contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: • collaboration with Boston University around the Kittyhawk cloud infrastructure, which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes • operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS • participation in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA's HPCS program. Both of these contributions were built on a foundation of previous operating systems research funded by the Department of Energy's FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source code for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM's approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exascale Operating Systems/Runtime program.

  5. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  6. Space Communications Emulation Facility

    NASA Technical Reports Server (NTRS)

    Hill, Chante A.

    2004-01-01

    Establishing space communication between ground facilities and other satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team here at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility comprises a 32-node distributed cluster of computers, with each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, labeled the Emulation Manager, controls the interaction of the satellites and ground stations, as well as handling the recording of data. The Emulation Manager is installed on a Linux operating system, employing both Java and C++ code. The emulation scenarios are written in the eXtensible Markup Language (XML). XML documents are designed to store, carry, and exchange data. With XML documents, data can be exchanged between incompatible systems, which makes it ideal for this project because Linux, Mac, and Windows operating systems are all used. Unfortunately, XML documents cannot display data like HTML documents. Therefore, the SCEF team uses XML Schema Definition (XSD), or just schema, to describe the structure of an XML document. Schemas are very important because they have the capability to validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line on the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using some very sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this type of software is being modified to better fit the requirements of the SCEF project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by inserting a toolbar onto the preexisting GUI.
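
    The validation step the Emulation Manager performs can be reproduced with standard tooling. Below is a minimal Python sketch using lxml that checks every XML scenario file in a directory against an XSD schema; the file and directory names are hypothetical, and SCEF's own validator is part of its Java code base rather than Python.

```python
from pathlib import Path
from lxml import etree

# Hypothetical file names -- the real SCEF schema and scenario directory
# are configured on the Emulation Manager's command line under Linux.
schema = etree.XMLSchema(etree.parse("scenario.xsd"))

for scenario in Path("scenarios").glob("*.xml"):
    doc = etree.parse(str(scenario))
    if schema.validate(doc):
        print(f"{scenario.name}: valid")
    else:
        # schema.error_log lists every constraint the document violates
        print(f"{scenario.name}: INVALID\n{schema.error_log}")
```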

  7. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller-scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post-processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  8. Corn residue utilization by livestock in the USA

    USDA-ARS?s Scientific Manuscript database

    Corn (Zea mays L.) residue grazing or harvest provides a simple and economical practice to integrate crops and livestock. Limited information is available on how widespread corn residue utilization is practiced by US producers. In 2010, the USDA-ERS surveyed producers from 19 states on corn grain ...

  9. The Productivity Dilemma in Workplace Health Promotion.

    PubMed

    Cherniack, Martin

    2015-01-01

    Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.
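
    The sensitivity to discounting that the abstract points to follows directly from the standard present-value calculation used in cost-benefit analysis. As a generic illustration (not the paper's specific model), the present value of a stream of net benefits B_t over a horizon of T years at discount rate r is

```latex
% Generic CBA present value; B_t = benefits minus costs in year t,
% r = annual discount rate. Raising r shrinks the weight of distant
% health benefits, which is why the choice of discounting assumptions
% can flip the sign of a workplace-health-promotion ROI estimate.
\mathrm{PV} \;=\; \sum_{t=0}^{T} \frac{B_t}{(1+r)^{t}}
```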

  10. Dearomative dihydroxylation with arenophiles

    NASA Astrophysics Data System (ADS)

    Southgate, Emma H.; Pospech, Jola; Fu, Junkai; Holycross, Daniel R.; Sarlah, David

    2016-10-01

    Aromatic hydrocarbons are some of the most elementary feedstock chemicals, produced annually on a million-metric-ton scale, and are used in the production of polymers, paints, agrochemicals and pharmaceuticals. Dearomatization reactions convert simple, readily available arenes into more complex molecules with broader potential utility; however, despite substantial progress and achievements in this field, there are relatively few methods for the dearomatization of simple arenes that also selectively introduce functionality. Here we describe a new dearomatization process that involves visible-light activation of small heteroatom-containing organic molecules, termed arenophiles, which results in their para-cycloaddition with a variety of aromatic compounds. The approach uses N-N-arenophiles to enable dearomative dihydroxylation and diaminodihydroxylation of simple arenes. This strategy provides direct and selective access to highly functionalized cyclohexenes and cyclohexadienes and is orthogonal to existing chemical and biological dearomatization processes. Finally, we demonstrate the synthetic utility of this strategy with the concise synthesis of several biologically active compounds and natural products.

  11. Simple Screening Test for Exercise-Induced Bronchospasm in the Middle School Athlete

    ERIC Educational Resources Information Center

    Weiss, Tyler J.; Baker, Rachel H.; Weiss, Jason B.; Weiss, Michelle M.

    2013-01-01

    This article recommends and provides results from a simple screening test that could be incorporated into a standardized school evaluation for all children participating in sports and physical education classes. The test can be employed by physical educators utilizing their own gym to identify children who demonstrate signs of exercise-induced…

  12. The TSO Logic and G2 Software Product

    NASA Technical Reports Server (NTRS)

    Davis, Derrick D.

    2014-01-01

    This internship assignment for spring 2014 was at John F. Kennedy Space Center (KSC), in NASA's Engineering and Technology (NE) group in support of the Control and Data Systems Division (NE-C) within the Systems Hardware Engineering Branch (NE-C4). The primary focus was system integration and benchmarking utilizing two separate computer software products. The first half of the internship was spent assisting NE-C4's Electronics and Embedded Systems Engineer, Kelvin Ruiz, and fellow intern Scott Ditto with the evaluation of a new piece of software called G2. It is developed by the Gensym Corporation and was introduced to the group as a tool for monitoring launch environments. All fellow interns and employees of the G2 group worked together to better understand the significance of the G2 application and how KSC can benefit from its capabilities. The second stage of the spring project was to assist with an ongoing integration of a benchmarking tool developed by a group of engineers from a Canadian organization known as TSO Logic. Guided by NE-C4's Computer Engineer, Allen Villorin, the NASA 2014 interns put forth great effort in helping to integrate TSO's software into the Spaceport Processing Systems Development Laboratory (SPSDL) for further testing and evaluation. The TSO Logic group claims that their software is designed for monitoring and reducing energy consumption at in-house server farms and large data centers, and that it allows data centers to control the power state of servers without impacting availability or performance and without changes to infrastructure; the focus of the assignment was to test this claim. TSO's founder and CEO, Aaron Rallo, and CTO, Chris Tivel, both came to KSC to assist with the installation of their software in the SPSDL laboratory. TSO's software was installed onto 24 individual workstations running three different operating systems. The workstations were divided into three groups of 8, with each group having its own operating system. The first group ran Ubuntu's Debian-based Linux, the second group ran Windows 7 Professional, and the third group ran Red Hat Linux. The highlight of this portion of the assignment was to compose documentation expressing the overall impression of the software and its capabilities.

  13. CASTOR: Widely Distributed Scalable Infospaces

    DTIC Science & Technology

    2008-11-01

    Progress against planned objectives: enable nimble apps that react fast as... generation of scalable, reliable, ultra-fast event notification in Linux data centers. • Maelstrom, a spin-off from Ricochet, offers a powerful new option... out potential enhancements to WS-EVENTING and WS-NOTIFICATION based on our work. Potential impact for the warfighter: QSM achieves extremely fast...

  14. Project Golden Gate: towards real-time Java in space missions

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel; Bollella, Greg; Canham, Tim; Carson, Vanessa; Champlin, Virgil; Giovannoni, Brian; Indictor, Mark; Meyer, Kenny; Murray, Alex; Reinholtz, Kirk

    2004-01-01

    This paper describes the problem domain and our experimentation with the first commercial implementation of the Real Time Specification for Java. The two main issues explored in this report are: (1) the effect of RTSJ's non-heap memory on the programming model, and (2) performance benchmarking of RTSJ/Linux relative to C++/VxWorks.

  15. VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)

    NASA Astrophysics Data System (ADS)

    Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.

    2017-08-01

    Language: Fortran 90 Code tested under the following compilers/operating systems: ifort/CentOS linux Description of input data: No input necessary. Description of output data: Output files: HZs.dat, HZ_coefficients.dat System requirements: No major system requirement. Fortran compiler necessary. Calls to external routines: None. Additional comments: None (1 data file).

  16. Peregrine System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    Example commands for accessing Peregrine from a Linux or Mac OS X system: connect to peregrine.hpc.nrel.gov or to one of the login nodes (for example, $ ssh -Y), then create a file called hello.F90 containing a minimal Fortran program (beginning with "program hello").

  17. Running GUI Applications on Peregrine from OSX | High-Performance Computing

    Science.gov Websites

    Learn how to use Virtual Network Computing (VNC) to access a Linux graphical desktop environment on Peregrine: a VNC server process on Peregrine manages a virtual desktop, which is reached through a forwarded local port (on, e.g., your laptop); the associated password is persistent, so remember it, as you will use it whenever accessing the virtual desktop.

  18. Power Monitoring Using the Raspberry Pi

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    2014-01-01

    The Raspberry Pi is a credit card size low powered compute board with Ethernet connection, HDMI video output, audio, full Linux operating system run from an SD card, and more, all for $45. With cables, SD card, etc., the cost is about $70. Originally designed to help teach computer science principles to low income children and students, the Pi has…

  19. Web-Based Search and Plot System for Nuclear Reaction Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otuka, N.; Nakagawa, T.; Fukahori, T.

    2005-05-24

    A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for CGI scripts and the database manager, respectively. Two prototypes for experimental and evaluated data are presented.

  20. Bridging the Arts and Computer Science: Engaging At-Risk Students through the Integration of Music

    ERIC Educational Resources Information Center

    Moyer, Lisa; Klopfer, Michelle; Ernst, Jeremy V.

    2018-01-01

    Linux Laptop Orchestra (L2Ork), founded in 2009 in the Virginia Tech Music Department's Digital and Interactive Sound & Intermedia Studio, "explores the power of gesture, communal interaction, and the multidimensionality of arts, as well as technology's potential to seamlessly integrate arts and sciences with particular focus on K-12…

  1. Allinea Parallel Profiling and Debugging Tools on the Peregrine System |

    Science.gov Websites

    Install the Allinea remote client for your platform (Mac/Windows/Linux) and configure it to connect to Peregrine, or view the GUI directly through X11 forwarding: enable X forwarding when connecting to Peregrine and type 'map' to open the profiler GUI. The page continues with instructions for debugging.

  2. An Experimental Exploration of the Impact of Sensor-Level Packet Loss on Network Intrusion Detection

    DTIC Science & Technology

    2015-07-01

    Our observation of the graph reveals that this is most likely a nonlinear relationship resembling a sigmoid function (see Fig. 10).

  3. Protocell design through modular compartmentalization

    PubMed Central

    Miller, David; Booth, Paula J.; Seddon, John M.; Templer, Richard H.; Law, Robert V.; Woscholski, Rudiger; Ces, Oscar; Barter, Laura M. C.

    2013-01-01

    De novo synthetic biological design has the potential to significantly impact upon applications such as energy generation and nanofabrication. Current designs for constructing organisms from component parts are typically limited in scope, as they utilize a cut-and-paste ideology to create simple stepwise engineered protein-signalling pathways. We propose the addition of a new design element that segregates components into lipid-bound ‘proto-organelles’, which are interfaced with response elements and housed within a synthetic protocell. This design is inspired by living cells, which utilize multiple types of signalling molecules to facilitate communication between isolated compartments. This paper presents our design and validation of the components required for a simple multi-compartment protocell machine, for coupling a light transducer to a gene expression system. This represents a general design concept for the compartmentalization of different types of artificial cellular machinery and the utilization of non-protein signal molecules for signal transduction. PMID:23925982

  4. Protocell design through modular compartmentalization.

    PubMed

    Miller, David; Booth, Paula J; Seddon, John M; Templer, Richard H; Law, Robert V; Woscholski, Rudiger; Ces, Oscar; Barter, Laura M C

    2013-10-06

    De novo synthetic biological design has the potential to significantly impact upon applications such as energy generation and nanofabrication. Current designs for constructing organisms from component parts are typically limited in scope, as they utilize a cut-and-paste ideology to create simple stepwise engineered protein-signalling pathways. We propose the addition of a new design element that segregates components into lipid-bound 'proto-organelles', which are interfaced with response elements and housed within a synthetic protocell. This design is inspired by living cells, which utilize multiple types of signalling molecules to facilitate communication between isolated compartments. This paper presents our design and validation of the components required for a simple multi-compartment protocell machine, for coupling a light transducer to a gene expression system. This represents a general design concept for the compartmentalization of different types of artificial cellular machinery and the utilization of non-protein signal molecules for signal transduction.

  5. A new approach to global control of redundant manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    A new and simple approach to configuration control of redundant manipulators is presented. In this approach, the redundancy is utilized to control the manipulator configuration directly in task space, where the task will be performed. A number of kinematic functions are defined to reflect the desirable configuration that will be achieved for a given end-effector position. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. An adaptive scheme is then utilized to globally control the configuration variables so as to achieve tracking of some desired reference trajectories. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The control law is simple and computationally very fast, and does not require the complex manipulator dynamic model.

  6. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum using multiple types of parallelism and other optimizations. The total speed-ups from all improvements are significant. Mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
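
    The hash-tree idea mentioned above can be sketched briefly: split the file into fixed-size chunks, hash the chunks in parallel, then hash the ordered concatenation of the chunk digests. The Python sketch below illustrates that pattern only; it is not mcp/msum's actual algorithm or on-disk format, the file name is hypothetical, and a real tool overlaps reads with hashing and coordinates work across nodes.

```python
import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MiB chunks; the size is arbitrary here

def hash_chunk(args):
    """Hash one chunk of the file, identified by its byte offset."""
    path, offset = args
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.md5(f.read(CHUNK)).digest()

def tree_checksum(path: str) -> str:
    """Parallel, tree-style checksum: hash chunks concurrently, then hash
    the ordered concatenation of the per-chunk digests."""
    size = os.path.getsize(path)
    offsets = [(path, o) for o in range(0, size, CHUNK)]
    with ProcessPoolExecutor() as pool:
        digests = list(pool.map(hash_chunk, offsets))
    return hashlib.md5(b"".join(digests)).hexdigest()

if __name__ == "__main__":
    print(tree_checksum("somefile.bin"))  # hypothetical input file
```

    Note that this digest is a hash of hashes, so it will not match a plain md5sum of the same file; the sketch only illustrates how parallelism can be applied to an inherently serial checksum.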

  7. Levelized Cost of Energy Calculator | Energy Analysis | NREL

    Science.gov Websites

    The levelized cost of energy (LCOE) calculator provides a simple calculator for utility-scale projects; additional factors need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
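
    For reference, the standard levelized cost of energy definition that such calculators implement is the ratio of discounted lifetime costs to discounted lifetime energy production; the exact cost categories in NREL's tool may differ, so treat the following as the generic textbook form.

```latex
% Generic LCOE: I_t = investment, M_t = O&M, F_t = fuel costs in year t;
% E_t = energy produced in year t; r = discount rate; n = system lifetime.
\mathrm{LCOE} \;=\;
  \frac{\displaystyle\sum_{t=0}^{n} \frac{I_t + M_t + F_t}{(1+r)^{t}}}
       {\displaystyle\sum_{t=0}^{n} \frac{E_t}{(1+r)^{t}}}
```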

  8. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

    A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms are present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…
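
    The expanding-cube idea can be demonstrated in a few lines: enumerate all integer lattice vectors inside a cube around a central site and tally how many fall at each squared distance. The Python sketch below assumes a simple cubic lattice with unit spacing, which is an illustrative choice rather than the specific lattices treated in the article.

```python
from collections import Counter

def lattice_site_distribution(half_width: int) -> Counter:
    """Count neighbours of the origin at each squared distance, using all
    sites of a simple cubic lattice inside a (2*half_width+1)^3 cube."""
    counts = Counter()
    rng = range(-half_width, half_width + 1)
    for i in rng:
        for j in rng:
            for k in rng:
                if (i, j, k) != (0, 0, 0):
                    counts[i * i + j * j + k * k] += 1
    return counts

# First shells of a simple cubic lattice: 6 neighbours at d^2 = 1,
# 12 at d^2 = 2, 8 at d^2 = 3, and so on.
for d2, n in sorted(lattice_site_distribution(4).items())[:5]:
    print(d2, n)
```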

  9. Runwien: a text-based interface for the WIEN package

    NASA Astrophysics Data System (ADS)

    Otero de la Roza, A.; Luaña, Víctor

    2009-05-01

    A new text-based interface for WIEN2k, the full-potential linearized augmented plane-waves (FPLAPW) program, is presented. This code provides an easy to use, yet powerful way of generating arbitrarily large sets of calculations. Thus, properties over a potential energy surface and WIEN2k parameter exploration can be calculated using a simple input text file. This interface also provides new capabilities to the WIEN2k package, such as the calculation of elastic constants on hexagonal systems or the automatic gathering of relevant information. Additionally, runwien is modular, flexible and intuitive. Program summaryProgram title: runwien Catalogue identifier: AECM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 3 No. of lines in distributed program, including test data, etc.: 62 567 No. of bytes in distributed program, including test data, etc.: 610 973 Distribution format: tar.gz Programming language: gawk (with locale POSIX or similar) Computer: All running Unix, Linux Operating system: Unix, GNU/Linux Classification: 7.3 External routines: WIEN2k ( http://www.wien2k.at/), GAWK ( http://www.gnu.org/software/gawk/), rename by L. Wall, a Perl script which renames files, modified by R. Barker to check for the existence of target files, gnuplot ( http://www.gnuplot.info/) Subprograms used:Cat Id: ADSY_v1_0/AECB_v1_0, Title: GIBBS/CRITIC, Reference: CPC 158 (2004) 57/CPC 999 (2009) 999 Nature of problem: Creation of a text-based, batch-oriented interface for the WIEN2k package. Solution method: WIEN2k solves the Kohn-Sham equations of a solid using the FPLAPW formalism. Runwien interprets an input file containing the description of the geometry and structure of the solid and drives the execution of the WIEN2k programs. The input is simplified thanks to the default values of the WIEN2k parameters known to runwien. Additional comments: Designed for WIEN2k versions 06.4, 07.2, 08.2, and 08.3. Running time: For the test case (TiC), a single geometry takes 5 to 10 minutes on a typical desktop PC (Intel Pentium 4, 3.4 GHz, 1 GB RAM). The full example including the calculation of the elastic constants and the equation of state, takes 9 hours and 32 minutes.

  10. Scheduling in the Face of Uncertain Resource Consumption and Utility

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Dearden, Richard

    2003-01-01

    We discuss the problem of scheduling tasks that consume uncertain amounts of a resource with known capacity and where the tasks have uncertain utility. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We show that the problems are NP-complete, and present some results that characterize the behavior of some simple heuristics over a variety of problem classes.
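
    A hedged reading of the problem statement, written as a small optimization (the paper's exact formulation may differ): choose tasks whose random resource consumptions must fit within the known capacity, while the expected utility of the chosen set exceeds the required lower bound.

```latex
% Hedged formulation; c_i and u_i are random variables, C is the known
% resource capacity, L the required expected utility, x_i indicates
% whether task i is scheduled, and the tolerance \varepsilon on the
% chance constraint is an assumption made here for illustration.
\text{find } x \in \{0,1\}^n \quad\text{s.t.}\quad
\Pr\!\Big[\textstyle\sum_i c_i x_i \le C\Big] \ge 1-\varepsilon
\quad\text{and}\quad
\mathbb{E}\Big[\textstyle\sum_i u_i x_i\Big] \ge L .
```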

  11. Preparation of isotopic molybdenum foils utilizing small quantities of material

    NASA Astrophysics Data System (ADS)

    Lipski, A. R.; Lee, L. L.; Liang, J. F.; Mahon, J. C.

    1993-09-01

    A simple method utilizing a small amount of isotopic material for production of molybdenum foils is discussed. An e-gun is used in the procedure. The Mo powder undergoes reduction-sintering and melting-solidifying steps leading to the creation of a metallic droplet suitable for further cold rolling or vacuum deposition.

  12. 45 CFR 1356.71 - Federal review of the eligibility of children in foster care and the eligibility of foster care...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... by ACF statistical staff from the Adoption and Foster Care Analysis and Reporting System (AFCARS... primary review utilizing probability sampling methodologies. Usually, the chosen methodology will be simple random sampling, but other probability samples may be utilized, when necessary and appropriate. (3...

  13. The Effective Utilization of an Instructional Film in a Learning Sequence

    ERIC Educational Resources Information Center

    Thomas, Ian D.

    1973-01-01

    Summarizes some of the main research findings relating to the utilization of instructional films in classroom situations. Reports the confirmation of a number of these findings in a survey conducted to investigate the teaching and learning practices used in acquiring the ability to solve simple problems using Coulomb's Law. (Author/JR)

  14. A discrete-time adaptive control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.

  15. Phased Retrofits in Existing Homes in Florida Phase II. Shallow Plus Retrofits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. Sutherland; Parker, D.; Martin, E.

    2016-02-01

    The BAPIRC team and the Florida Power and Light (FPL) electric utility pursued a pilot phased energy-efficiency retrofit program in Florida by creating detailed data on the energy and economic performance of two levels of retrofit: simple and deep. For this Phased Deep Retrofit (PDR) project, a total of 56 homes spread across the utility partner's territory in east central Florida, southeast Florida, and southwest Florida were instrumented between August 2012 and January 2013, and received simple pass-through retrofit measures during the period of March 2013 - June 2013. Ten of these homes received a deeper package of retrofits during August 2013 - December 2013.

  16. A survey on the utilization of disposable quadrant articulators.

    PubMed

    Thornton, Linda J

    2002-01-01

    This study evaluated utilization of the six most popular disposable quadrant articulators presently used in crown fabrication. Most general practitioners and crown and bridge laboratory technicians reported that they frequently or always used disposable quadrant articulators for single units and simple three-unit fixed partial dentures. All evaluated articulators demonstrated an inability to maintain and repeat the centric stop position. The disposable quadrant articulator, used in combination with dual-arch impressions, is a viable method for fabricating single and short-span restorations. While a nonrepeatable centric stop was present in all six articulators evaluated, the method was considered a time-efficient and cost-saving alternative to traditional all-metal simple-hinge articulators.

  17. Remarks on forensically interesting Sony Playstation 3 console features

    NASA Astrophysics Data System (ADS)

    Daugs, Gunnar; Kröger, Knut; Creutzburg, Reiner

    2012-02-01

    This paper deals with forensically interesting features of the Sony Playstation 3 game console. The construction and the internal structure are analyzed more precisely. Interesting forensic features of the operating system and the file system are presented. Differences between a PS3 with and without jailbreak are introduced and possible forensic attempts when using an installed Linux are discussed.

  18. Automated Classification of Power Signals

    DTIC Science & Technology

    2008-06-01

    determine when a transient occurs. The identification of this signal can then be determined by an expert classifier and a series of these...the manual identification and classification of system events. Once events were located, the characteristics were examined to determine if system... identification code, which varies depending on the system classifier that is specified. Figure 3-7 provides an example of a Linux directory containing

  19. Setting up a Low-Cost Lab Management System for a Multi-Purpose Computing Laboratory Using Virtualisation Technology

    ERIC Educational Resources Information Center

    Mok, Heng Ngee; Lee, Yeow Leong; Tan, Wee Kiat

    2012-01-01

    This paper describes how a generic computer laboratory equipped with 52 workstations is set up for teaching IT-related courses and other general purpose usage. The authors have successfully constructed a lab management system based on decentralised, client-side software virtualisation technology using Linux and free software tools from VMware that…

  20. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  1. Computers in Libraries, 2000: Proceedings (15th, Washington, D.C., March 15-17, 2000).

    ERIC Educational Resources Information Center

    Nixon, Carol, Comp.; Burmood, Jennifer, Comp.

    Topics of the Proceedings of the 15th Annual Computers in Libraries Conference (March 15-17, 2000) include: Linux and open source software in an academic library; a Master Trainer Program; what educators need to know about multimedia and copyright; how super searchers find business information online; managing print costs; new technologies in wide…

  2. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  3. A Prototype Implementation of a Time Interval File Protection System in Linux

    DTIC Science & Technology

    2006-09-01

    when a user logs in, the /etc/passwd file is read by the system to get the user's home directory. The user's login shell then changes the directory... • Users can be added with the command: # useradd -m <username> • Set the password by: # passwd <username> • Make a copy of the

  4. Linux Makes the Grade: An Open Source Solution That's Time Has Come

    ERIC Educational Resources Information Center

    Houston, Melissa

    2007-01-01

    In 2001, Indiana officials at the Department of Education were taking stock. The schools had an excellent network infrastructure and had installed significant numbers of computers for 1 million public school enrollees. Yet students were spending less than an hour a week on the computer. It was then that state officials knew each student needed a…

  5. Spare a Little Change? Towards a 5-Nines Internet in 250 Lines of Code

    DTIC Science & Technology

    2011-05-01

    Performing organization: Carnegie Mellon University, School of Computer Science, Pittsburgh, PA 15213. Keywords: Internet reliability, BGP performance, Quagga. This document includes excerpts of the source code for the Linux operating system...

  6. 67 FR 24495 - United States v. Microsoft Corporation; Public Comments; Notice (MTC-00003461 - MTC-00007629)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2002-05-03

    ... (web-serving software), Linux, Perl, and those who are building a compatible & free version of MS's..., Argument from Design, Argument from Design-Web & Multimedia, http://www.ardes.com MTC-00003464... organization could be a good target for this effort. Their web address is http://www.gnu.org/. This effort...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Robert; Rivers, Wilmer

    any single computer program for seismic data analysis will not have all the capabilities needed to study reference events, since these detailed studies will be highly specialized. It may be necessary to develop and test new algorithms, and then these special codes must be integrated with existing software to use their conventional data-processing routines. We have investigated two means of establishing communications between the legacy and new codes: CORBA and XML/SOAP Web services. We have investigated making new Java code communicate with a legacy C-language program, geotool, running under Linux. Both methods were successful, but both were difficult to implement. C programs on UNIX/Linux are poorly supported for Web services, compared with the Java and .NET languages and platforms. Easier-to-use middleware will be required for scientists to construct distributed applications as easily as stand-alone ones. Considerable difficulty was encountered in modifying geotool, and this problem shows the need to use component-based user interfaces instead of large C-language codes where changes to one part of the program may introduce side effects into other parts. We have nevertheless made bug fixes and enhancements to that legacy program, but it remains difficult to expand it through communications with external software.

  8. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very effective in optimizing the running systems and detecting misbehaving processes or nodes.

  9. ARCHANGEL: Galaxy Photometry System

    NASA Astrophysics Data System (ADS)

    Schombert, James

    2011-07-01

    ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes. The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent, thus, these routines can be used for non-optical data as well as typical imaging datasets. ARCHANGEL has been tested on several current OS's (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based, therefore, a current installation of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.

  10. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    PubMed

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures, and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the World Wide Web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
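
    The core dynamic clamp computation is straightforward: at every clock tick, read the membrane potential, update the conductance model, and command the corresponding current. The sketch below shows one such update for a single deterministic conductance with generic first-order gating; the DAQ calls named in the comments (read_membrane_potential_mV, write_command_current_pA) are hypothetical placeholders, and the gating parameters are illustrative rather than taken from the paper.

        # Illustrative dynamic clamp update loop (placeholders for real-time DAQ I/O).
        import math

        DT_MS = 0.05          # 20 kHz update rate
        G_MAX_NS = 10.0       # maximal conductance (nS)
        E_REV_MV = -80.0      # reversal potential (mV)
        m = 0.0               # gating variable

        def m_inf(v_mv):
            # Steady-state activation: generic Boltzmann function.
            return 1.0 / (1.0 + math.exp(-(v_mv + 40.0) / 5.0))

        def step(v_mv):
            """One clock tick: update gating and return the injected current (pA)."""
            global m
            tau_ms = 1.0
            m += DT_MS * (m_inf(v_mv) - m) / tau_ms      # first-order gating kinetics
            return G_MAX_NS * m * (v_mv - E_REV_MV)      # I = g * (V - E); nS * mV = pA

        # In the real system this loop runs under the real-time kernel:
        #   v = read_membrane_potential_mV()    # hypothetical DAQ input
        #   write_command_current_pA(step(v))   # hypothetical DAQ output
        if __name__ == "__main__":
            for v in (-70.0, -50.0, -30.0):
                print(v, step(v))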

  12. Cloudy's Journey from FORTRAN to C, Why and How

    NASA Astrophysics Data System (ADS)

    Ferland, G. J.

    Cloudy is a large-scale plasma simulation code that is widely used across the astronomical community as an aid in the interpretation of spectroscopic data. The cover of the ADAS VI book featured predictions of the code. The FORTRAN 77 source code has always been freely available on the Internet, contributing to its widespread use. The coming of PCs and Linux has fundamentally changed the computing environment. Modern Fortran compilers (F90 and F95) are not freely available. A common-use code must be written in either FORTRAN 77 or C to be Open Source/GNU/Linux friendly. F77 has serious drawbacks - modern language constructs cannot be used, students do not have skills in this language, and it does not contribute to their future employability. It became clear that the code would have to be ported to C to have a viable future. I describe the approach I used to convert Cloudy from FORTRAN 77 with MILSPEC extensions to ANSI/ISO 89 C. Cloudy is now openly available as a C code, and will evolve to C++ as gcc and standard C++ mature. Cloudy looks to a bright future with a modern language.

  13. Data Collection with Linux in the Undergraduate Physics Lab

    NASA Astrophysics Data System (ADS)

    Ramey, R. Dwayne

    2004-11-01

    Electronic data devices such as photogates can greatly facilitate data collection in the undergraduate physics laboratory. Unfortunately, these devices have several practical drawbacks. While the photogates themselves are not particularly expensive, manufacturers of these devices have created intermediary hardware devices for data buffering and manipulation. These devices, while useful in some contexts, greatly increase the overall price of data collection and, through the use of proprietary software, limit the ability of the end user to customize the software. As an alternative, I outline the procedure for establishing a computer-based data collection system that consists of open-source software and user-constructed connections. The data collection system consists of the wiring needed to connect a data device to a computer and the software needed to collect and manipulate data. Data devices can be connected to a computer through either the USB port or the gameport of a sound card. Software capable of collecting and manipulating the data from a photogate-type device on a Linux system has been developed and will be discussed. Results for typical undergraduate photogate-based experiments will be shown, and error limits and data collection rates will be discussed for both the gameport and USB connections.
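
    One way to read a photogate wired to a sound-card gameport under Linux is to treat it as a joystick button and read timestamped events from the kernel's joystick interface. The sketch below does exactly that with the standard library; the device path /dev/input/js0 and the assignment of the gate to button 0 are assumptions for illustration, not necessarily the wiring described in the talk.

        # Read photogate transitions from the Linux joystick interface (gameport
        # buttons show up as joystick buttons).  Assumes the gate is on button 0
        # of /dev/input/js0.
        import struct

        JS_EVENT_BUTTON = 0x01
        EVENT_FORMAT = "<IhBB"      # time_ms (u32), value (s16), type (u8), number (u8)
        EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

        with open("/dev/input/js0", "rb") as js:
            while True:
                event = js.read(EVENT_SIZE)
                if len(event) < EVENT_SIZE:
                    break
                time_ms, value, etype, number = struct.unpack(EVENT_FORMAT, event)
                # Keep only button 0 transitions; value is 1 (beam blocked) or 0 (clear).
                if etype & JS_EVENT_BUTTON and number == 0:
                    print(time_ms, "blocked" if value else "clear")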

  14. Digital Plasma Control System for Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Ferrara, M.; Wolfe, S.; Stillerman, J.; Fredian, T.; Hutchinson, I.

    2004-11-01

    A digital plasma control system (DPCS) has been designed to replace the present C-Mod system, which is based on a hybrid analog-digital computer. The initial implementation of DPCS comprises two 64 channel, 16 bit, low-latency cPCI digitizers, each with 16 analog outputs, controlled by a rack-mounted single-processor Linux server, which also serves as the compute engine. A prototype system employing three older 32 channel digitizers was tested during the 2003-04 campaign. The hybrid's linear PID feedback system was emulated by IDL code executing a synchronous loop, using the same target waveforms and control parameters. Reliable real-time operation was accomplished under a standard Linux OS (RH9) by locking memory and disabling interrupts during the plasma pulse. The DPCS-computed outputs agreed to within a few percent with those produced by the hybrid system, except for discrepancies due to offsets and non-ideal behavior of the hybrid circuitry. The system operated reliably, with no sample loss, at more than twice the 10 kHz design specification, providing extra time for implementing more advanced control algorithms. The code is fault-tolerant and produces consistent output waveforms even with 10% sample loss.
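
    Two ingredients of reliable soft real-time behaviour under a stock Linux kernel are locking the process's memory and running it under a real-time scheduling class. The sketch below shows both steps from Python (mlockall via ctypes and SCHED_FIFO via os.sched_setscheduler); it illustrates the general technique only, since the DPCS control loop itself was IDL code and the authors additionally disabled interrupts during the pulse.

        # Lock memory and request SCHED_FIFO priority before a timing-critical loop
        # (Linux only; requires appropriate privileges).
        import ctypes
        import os

        MCL_CURRENT = 1
        MCL_FUTURE = 2

        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
            raise OSError(ctypes.get_errno(), "mlockall failed")

        # Highest real-time priority for this process under the FIFO scheduler.
        priority = os.sched_get_priority_max(os.SCHED_FIFO)
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

        # ... timing-critical control loop would run here ...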

  15. MCR Container Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P

    MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find/install the necessary libraries either by chasing runtime errors that result from missing libraries or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address various shortcomings, Docker Images based on Community Enterprise Operating System (CentOS) 7, a derivative of Redhat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML Script, can be used to create a simulated computing cluster where MATLAB Compiler-created binaries can be executed using a sample Slurm Workload Manager script.
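
    A quick way to discover which shared-library dependencies of an MCR installation are unresolved on a given host is to run ldd over its libraries and collect the entries marked "not found". The sketch below does this in Python; the MCR install path is a hypothetical example, and this is a diagnostic aid rather than part of the container tooling itself.

        # List unresolved shared-library dependencies of an MCR install tree
        # by parsing ldd output (Linux).  The path below is only an example.
        import pathlib
        import subprocess

        mcr_root = pathlib.Path("/usr/local/MATLAB/MATLAB_Runtime")  # hypothetical path
        missing = set()

        for lib in mcr_root.rglob("*.so*"):
            result = subprocess.run(["ldd", str(lib)], capture_output=True, text=True)
            for line in result.stdout.splitlines():
                if "not found" in line:
                    missing.add(line.split("=>")[0].strip())

        for name in sorted(missing):
            print("missing dependency:", name)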

  16. HYBRIDCHECK: software for the rapid detection, visualization and dating of recombinant regions in genome sequence data.

    PubMed

    Ward, Ben J; van Oosterhout, Cock

    2016-03-01

    HYBRIDCHECK is a software package to visualize the recombination signal in large DNA sequence data sets, and it can be used to analyse recombination, genetic introgression, hybridization and horizontal gene transfer. It can scan large (multiple kb) contigs and whole-genome sequences of three or more individuals. HYBRIDCHECK is written in the R software for OS X, Linux and Windows operating systems, and it has a simple graphical user interface. In addition, the R code can be readily incorporated in scripts and analysis pipelines. HYBRIDCHECK implements several ABBA-BABA tests and visualizes the effects of hybridization and the resulting mosaic-like genome structure in high-density graphics. The package also reports the following: (i) the breakpoint positions, (ii) the number of mutations in each introgressed block, (iii) the probability that the identified region is not caused by recombination and (iv) the estimated age of each recombination event. The divergence times between the donor and recombinant sequence are calculated using a JC, K80, F81, HKY or GTR correction, and the dating algorithm is exceedingly fast. By estimating the coalescence time of introgressed blocks, it is possible to distinguish between hybridization and incomplete lineage sorting. HYBRIDCHECK is libre software and it and its manual are free to download from http://ward9250.github.io/HybridCheck/. © 2015 John Wiley & Sons Ltd.
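
    The dating step rests on a standard substitution-model correction: the observed proportion of differing sites is converted to an evolutionary distance and then to a time given an assumed mutation rate. The sketch below shows this for the Jukes-Cantor (JC) correction only; the mutation rate is a made-up placeholder, and HYBRIDCHECK itself additionally supports the K80, F81, HKY and GTR corrections.

        # Jukes-Cantor distance and a coalescence-time estimate for an
        # introgressed block (illustrative; mutation rate is a placeholder).
        import math

        def jc69_distance(p_observed):
            """Convert an observed proportion of differing sites to a JC69 distance."""
            return -0.75 * math.log(1.0 - (4.0 / 3.0) * p_observed)

        def divergence_time(p_observed, mu_per_site_per_gen):
            """Generations since divergence of donor and recombinant sequence."""
            return jc69_distance(p_observed) / (2.0 * mu_per_site_per_gen)

        # Example: 2% of sites differ in a block, mu = 1e-8 per site per generation.
        print(divergence_time(0.02, 1e-8))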

  17. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, so the cluster has eight processing cores in total. The cluster incorporates an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, and was carried out using a simple MPI Hello program written in the C language. Additionally, a performance test was done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the time required for the calculation is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU processor, which can be beneficial for research requiring high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
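
    The communication test described above is essentially the classic MPI "hello" exchange. The original program was written in C; the sketch below is an equivalent in Python using mpi4py (an assumption made here to keep all examples in one language), launched with mpiexec across the cluster.

        # MPI "hello" check: every rank reports its identity and host
        # (run e.g. with: mpiexec -n 8 python hello_mpi.py).
        from mpi4py import MPI
        import socket

        comm = MPI.COMM_WORLD
        print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()} "
              f"on {socket.gethostname()}")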

  18. Biographer: web-based editing and rendering of SBGN compliant biochemical networks

    PubMed Central

    Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas

    2013-01-01

    Motivation: The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. Results: We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. Availability: The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows- and Linux-based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL. Contact: edda.klipp@biologie.hu-berlin.de or handorf@physik.hu-berlin.de PMID:23574737

  19. The SAMI2 Open Source Project

    NASA Astrophysics Data System (ADS)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  20. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757

  1. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.

  2. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary. Program title: FLY 3.1. Catalogue identifier: ADSC_v2_0. Licensing provisions: yes. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 158 172. No. of bytes in distributed program, including test data, etc.: 4 719 953. Distribution format: tar.gz. Programming language: Fortran 90, C. Computer: Beowulf cluster, PC, MPP systems. Operating system: Linux, Aix. RAM: 100M words. Catalogue identifier of previous version: ADSC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159. Does the new version supersede the previous version?: yes. Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea of building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library. Now FLY can be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (fly_h.F90 routine). FLY creates the MPI Window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integration scheme, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard: the MPICH2 library on Linux systems was adopted. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca, with 512 nodes, 2 processors per node and 2 GB RAM per processor, was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card "C" Version and "D" Version. Operating system: Linux SuSE SLES 8. The code was compiled using the mpif90 compiler version 8.1 with basic optimization options, in order to obtain performance figures that can be usefully compared with other generic clusters.
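
    The MPI-2 feature FLY relies on is one-sided communication through window objects, as in the MPI_WIN_CREATE call quoted above. The sketch below shows the same idea in Python with mpi4py (an assumption; FLY itself is Fortran 90/C): each rank exposes an array through a window, and any rank can fetch a remote copy with a Get between two fences, without the owner participating explicitly.

        # One-sided access to a remote array through an MPI window (mpi4py sketch).
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank exposes its local particle positions through a window.
        local = np.full(4, float(rank), dtype="d")
        win = MPI.Win.Create(local, disp_unit=local.itemsize, comm=comm)

        # Every rank fetches the array owned by the next rank using a
        # one-sided Get between two fences.
        remote = np.empty_like(local)
        target = (rank + 1) % comm.Get_size()
        win.Fence()
        win.Get([remote, MPI.DOUBLE], target_rank=target)
        win.Fence()

        print(f"rank {rank} read {remote} from rank {target}")
        win.Free()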

  3. Demonstrating Principles of Spectrophotometry by Constructing a Simple, Low-Cost, Functional Spectrophotometer Utilizing the Light Sensor on a Smartphone

    ERIC Educational Resources Information Center

    Hosker, Bill S.

    2018-01-01

    A highly simplified variation on the do-it-yourself spectrophotometer using a smartphone's light sensor as a detector and an app to calculate and display absorbance values was constructed and tested. This simple version requires no electronic components or postmeasurement spectral analysis. Calibration graphs constructed from two…
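
    The quantity the app computes is ordinary Beer-Lambert absorbance from two light-sensor readings, A = log10(I0/I). The sketch below shows that calculation; the lux values are invented examples, not data from the article.

        # Absorbance from a smartphone light-sensor reading (Beer-Lambert law).
        import math

        def absorbance(blank_lux, sample_lux):
            """A = log10(I0 / I) for a blank (I0) and a sample (I) reading."""
            return math.log10(blank_lux / sample_lux)

        # Example readings (illustrative values only).
        print(round(absorbance(blank_lux=950.0, sample_lux=310.0), 3))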

  4. The Effect of Combining Analogy-Based Simulation and Laboratory Activities on Turkish Elementary School Students' Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Unlu, Zeynep Koyunlu; Dokme, Ibilge

    2011-01-01

    The purpose of this study was to investigate whether the combination of both analogy-based simulation and laboratory activities as a teaching tool was more effective than utilizing them separately in teaching the concepts of simple electricity. The quasi-experimental design that involved 66 seventh grade students from urban Turkish elementary…

  5. A Latent Variable Approach to the Simple View of Reading

    ERIC Educational Resources Information Center

    Kershaw, Sarah; Schatschneider, Chris

    2012-01-01

    The present study utilized a latent variable modeling approach to examine the Simple View of Reading in a sample of students from 3rd, 7th, and 10th grades (N = 215, 188, and 180, respectively). Latent interaction modeling and other latent variable models were employed to investigate (a) the functional form of the relationship between decoding and…

  6. A Simple Method to Determine the "R" or "S" Configuration of Molecules with an Axis of Chirality

    ERIC Educational Resources Information Center

    Wang, Cunde; Wu, Weiming

    2011-01-01

    A simple method for the "R" or "S" designation of molecules with an axis of chirality is described. The method involves projection of the substituents along the chiral axis, utilizes the Cahn-Ingold-Prelog sequence rules in assigning priority to the substituents, is easy to use, and has broad applicability. (Contains 5 figures.)

  7. Novel and simple route to fabricate fully biocompatible plasmonic mushroom arrays adhered on silk biopolymer

    NASA Astrophysics Data System (ADS)

    Park, Joonhan; Choi, Yunkyoung; Lee, Myungjae; Jeon, Heonsu; Kim, Sunghwan

    2014-12-01

    A fully biocompatible plasmonic quasi-3D nanostructure is demonstrated by a simple and reliable fabrication method using strong adhesion between gold and silk fibroin. The quasi-3D nature gives rise to complex photonic responses in reflectance that are prospectively useful in bio/chemical sensing applications. Laser interference lithography is utilized to fabricate large-area plasmonic nanostructures. Electronic supplementary information (ESI) available: The incident angle dependence of reflectance spectra and the atomic force microscopy image of the Au nanoparticle array on a silk film after 1 hour of ultrasonication. See DOI: 10.1039/c4nr05172f

  8. The Productivity Dilemma in Workplace Health Promotion

    PubMed Central

    Cherniack, Martin

    2015-01-01

    Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. Methods/Procedures. Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. Principal Findings. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374

  9. Implementation of Basic and Universal Gates In a single Circuit Based On Quantum-dot Cellular Automata Using Multi-Layer Crossbar Wire

    NASA Astrophysics Data System (ADS)

    Bhowmik, Dhrubajyoti; Saha, Apu Kr; Dutta, Paramartha; Nandi, Supratim

    2017-08-01

    Quantum-dot Cellular Automata (QCA) is one of the leading candidate nanotechnologies for replacing conventional electronic circuits, owing to its lower power consumption, higher speed and smaller size in comparison with CMOS technology. The essential device, a quantum-dot cell, can be used to build logic gates and wires, and is the key building block of nanotechnology circuits. By applying simple gates, the hardware requirements for a QCA circuit can be reduced and circuits can be made less complex in terms of level, delay and cell count. This article presents a modest approach for implementing novel, improved simple and universal gates, which can be applied to design many variants of complex QCA circuits. The proposed gates are simple in structure and capable of implementing any digital circuit. The main aim is to build all basic and universal gates in a single circuit, with and without crossbar wire. Simulation results and physical relations confirm their usefulness in implementing any digital circuit.

  10. The HARM score for gastrointestinal surgery: Application and validation of a novel, reliable and simple tool to measure surgical quality and outcomes.

    PubMed

    Crawshaw, Benjamin P; Keller, Deborah S; Brady, Justin T; Augestad, Knut M; Schiltz, Nicholas K; Koroukian, Siran M; Navale, Suparna M; Steele, Scott R; Delaney, Conor P

    2017-03-01

    The HospitAl length of stay, Readmissions and Mortality (HARM) score is a simple, inexpensive quality tool, linked directly to patient outcomes. We assess the HARM score for measuring surgical quality across multiple surgical populations. Upper gastrointestinal, hepatobiliary, and colorectal surgery cases between 2005 and 2009 were identified from the Healthcare Cost and Utilization Project California State Inpatient Database. Composite and individual HARM scores were calculated from length of stay, 30-day readmission and mortality, correlated to complication rates for each hospital and stratified by operative type. 71,419 admissions were analyzed. Higher HARM scores correlated with higher complication rates for all cases after risk adjustment and stratification by operation type, elective or emergent status. The HARM score is a simple and valid quality measurement for upper gastrointestinal, hepatobiliary and colorectal surgery. The HARM score could facilitate benchmarking to improve patient outcomes and resource utilization, and may facilitate outcome improvement. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. The predictive validity of prospect theory versus expected utility in health utility measurement.

    PubMed

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.

  12. Fiber stress values for design of glulam timber utility structures

    Treesearch

    R. Hernandez; R. C. Moody; R. H. Falk

    In this study, we developed a simple equation to calculate average fiber stress values for design of glued-laminated (glulam) timber utility structures as a function of design bending stress. We took the design bending stress values specified by the American Institute of Timber Construction (AITC) for various combinations of glulam timber, applied appropriate end-use...

  13. The impact of Pulpwood Rail Freight Costs on the Minnesota-Wisconsin Pulpwood Market

    Treesearch

    David C. Lothner

    1976-01-01

    Transportation costs affect the marketing and utilization of pulpwood. Their impact on the procurement and utilization of pulpwood often proves difficult to measure because deriving an average annual measure of the transportation cost is difficult. This note, by means of a simple index method for measuring regional interstate pulpwood rail freight costs, illustrates...

  14. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on the parallel file systems show that D-factor accurately captures the slow down of concurrent applications in such environments.
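
    The model's central statement is that each job carries a vector of per-resource loads and that the dilation factors of a co-scheduled job set follow a quadratic function of those loading vectors. The sketch below illustrates one simple reading of that statement with a made-up loading matrix and load vectors; it is not the calibrated D-factor model from the paper.

        # Illustrative quadratic dilation-factor calculation for co-scheduled jobs.
        import numpy as np

        # Per-resource loading vectors (e.g. CPU, disk, network shares) for two jobs.
        loads = {
            "job_a": np.array([0.6, 0.2, 0.1]),
            "job_b": np.array([0.3, 0.5, 0.4]),
        }

        # Hypothetical interference (loading) matrix: entry [i, j] scales how much
        # pressure on resource j slows work bound on resource i.
        M = np.array([
            [1.0, 0.3, 0.1],
            [0.3, 1.0, 0.2],
            [0.1, 0.2, 1.0],
        ])

        total_load = sum(loads.values())
        for name, own in loads.items():
            # Dilation of this job: quadratic form of its own load against the
            # aggregate load placed on the system by all co-scheduled jobs.
            dilation = float(own @ M @ total_load) / float(own @ M @ own)
            print(f"{name}: dilation factor ~ {dilation:.2f}")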

  15. A quantitative model of application slow-down in multi-resource shared systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Kim, Youngjae

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on the parallel file systems show that D-factor accurately captures the slow down of concurrent applications in such environments.

  16. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety performance of the network and to provide greater convenience and efficiency for operators and managers, a system for remote monitoring and managing of switches has been designed and implemented using advanced network technology and existing network resources. A fast-speed Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and can support a number of protocols. An optimal image compression algorithm, Motion-JPEG, is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed accordingly. The dynamic webpage Java Server Pages (JSP) development platform is utilized in the system. An SQL (Structured Query Language) Server database is used to store and access image information, network messages and users' data. The reliability and security of the system are further strengthened by access control. The software in the system is made to be cross-platform so that multiple operating systems (UNIX, Linux and Windows) are supported. The application of the system can greatly reduce manpower costs and can quickly find and solve problems.

  17. Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.

    PubMed

    Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M

    2015-11-01

    Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software package (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteered computing grids and heterogeneous high-performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high-performance heterogeneous hardware devices.
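
    At the heart of any replica exchange scheme, synchronous or asynchronous, is the Metropolis-style acceptance test for swapping parameters between a pair of replicas; the asynchronous variant simply applies it to whichever replicas happen to be idle, without a global barrier. The sketch below shows that acceptance step for a temperature exchange; it is a generic illustration, not ASyncRE's actual bookkeeping.

        # Metropolis acceptance for swapping temperatures between two idle replicas.
        import math
        import random

        def try_exchange(beta_i, u_i, beta_j, u_j):
            """Return True if replicas i and j should swap their temperatures.

            beta_* are inverse temperatures (1/kT); u_* are current potential energies.
            """
            delta = (beta_i - beta_j) * (u_i - u_j)
            return delta >= 0.0 or random.random() < math.exp(delta)

        # Example: a cold replica that has drifted to higher energy than a hot one.
        if try_exchange(beta_i=1.0, u_i=-95.0, beta_j=0.8, u_j=-120.0):
            print("swap accepted")
        else:
            print("swap rejected")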

  18. A Dynamic Clamp on Every Rig

    PubMed Central

    2017-01-01

    Abstract The dynamic clamp should be a standard part of every cellular electrophysiologist’s toolbox. That it is not, even 25 years after its introduction, comes down to three issues: money, the disruption that adding dynamic clamp to an existing electrophysiology rig entails, and the technical prowess required of experimenters. These have been valid and limiting issues in the past, but no longer. Technological advances associated with the so-called maker movement render them moot. We demonstrate this by implementing a fast (∼100 kHz) dynamic clamp system using an inexpensive microcontroller (Teensy 3.6). The overall cost of the system is less than USD$100, and assembling it requires no prior electronics experience. Modifying it—for example, to add Hodgkin–Huxley-style conductances—requires no prior programming experience. The system works together with existing electrophysiology data acquisition systems (for Macintosh, Windows, and Linux); it does not attempt to supplant them. Moreover, the process of assembling, modifying, and using the system constitutes a useful pedagogical exercise for students and researchers with no background but an interest in electronics and programming. We demonstrate the system’s utility by implementing conductances as fast as a transient sodium conductance and as complex as the Ornstein–Uhlenbeck conductances of the “point conductance” model of synaptic background activity. PMID:29085905

  19. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  20. siRNAmod: A database of experimentally validated chemically modified siRNAs.

    PubMed

    Dar, Showkat Ahmad; Thakur, Anamika; Qureshi, Abid; Kumar, Manoj

    2016-01-28

    Small interfering RNA (siRNA) technology has vast potential for functional genomics and development of therapeutics. However, it faces many obstacles, predominantly the instability of siRNAs due to nuclease digestion and their consequently short biological half-life. Chemical modifications in siRNAs provide means to overcome these shortcomings and improve their stability and potency. Despite their enormous utility, a bioinformatics resource for these chemically modified siRNAs (cm-siRNAs) has been lacking. Therefore, we have developed siRNAmod, a specialized databank for chemically modified siRNAs. Currently, our repository contains a total of 4894 chemically modified siRNA sequences, comprising 128 unique chemical modifications on different positions with various permutations and combinations. It incorporates important information on siRNA sequence, chemical modification, their number and respective position, structure, canonical simplified molecular input line entry system (SMILES) representation, efficacy of modified siRNA, target gene, cell line, experimental methods, reference, etc. It is developed and hosted using the Linux Apache MySQL PHP (LAMP) software bundle. Standard user-friendly browse and search facilities and analysis tools are also integrated. It would assist in understanding the effect of chemical modifications and further development of stable and efficacious siRNAs for research as well as therapeutics. siRNAmod is freely available at: http://crdd.osdd.net/servers/sirnamod.

  1. GPU-powered model analysis with PySB/cupSODA.

    PubMed

    Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F

    2017-11-01

    A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
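
    In practice a PySB model is assembled from Python objects and handed to a simulator class; per the paper, the cupSODA-backed simulator acts as a GPU drop-in for the CPU ODE integrator when many simulations are needed. The sketch below builds a minimal binding model and runs it with the CPU integrator; the commented-out CupSodaSimulator line is an assumption to verify against the installed PySB version.

        # Minimal PySB model run with the CPU ODE integrator; with an Nvidia GPU,
        # the cupSODA-backed simulator is intended as a drop-in replacement.
        import numpy as np
        from pysb import Model, Monomer, Parameter, Rule, Initial, Observable
        from pysb.simulator import ScipyOdeSimulator

        Model()
        Monomer('L', ['b'])
        Monomer('R', ['b'])
        Parameter('kf', 1e-3)
        Parameter('L_0', 100.0)
        Parameter('R_0', 80.0)
        Rule('bind', L(b=None) + R(b=None) >> L(b=1) % R(b=1), kf)
        Initial(L(b=None), L_0)
        Initial(R(b=None), R_0)
        Observable('LR', L(b=1) % R(b=1))

        tspan = np.linspace(0, 100, 101)
        result = ScipyOdeSimulator(model, tspan=tspan).run()
        print(result.observables['LR'][-1])

        # GPU variant (assumed interface, per the PySB/cupSODA paper):
        # from pysb.simulator import CupSodaSimulator
        # result = CupSodaSimulator(model, tspan=tspan).run()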

  2. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  3. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Dean J.; Harding, Lee T.

    Isotope identification algorithms that are contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms operating under Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files that are created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with development of radiation detection equipment.
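
    Gain compensation from background photopeaks works by comparing where a known background line actually falls in the recorded spectrum with where the nominal energy calibration says it should fall. The sketch below illustrates the idea using the 1460.8 keV K-40 line on a synthetic spectrum; the peak search is deliberately naive and the numbers are placeholders, so this shows only the principle, not the GADRAS algorithm.

        # Estimate a gain correction from the K-40 background photopeak (1460.8 keV).
        import numpy as np

        K40_KEV = 1460.8

        def gain_correction(counts, nominal_kev_per_channel, window_kev=100.0):
            """Ratio to multiply the nominal gain by so the K-40 peak lands at 1460.8 keV."""
            channels = np.arange(len(counts))
            nominal_kev = channels * nominal_kev_per_channel
            # Naive peak search in a window around the expected K-40 position.
            in_window = np.abs(nominal_kev - K40_KEV) < window_kev
            peak_channel = channels[in_window][np.argmax(counts[in_window])]
            return K40_KEV / (peak_channel * nominal_kev_per_channel)

        # Synthetic background spectrum with the K-40 peak shifted by a 2% gain error.
        rng = np.random.default_rng(0)
        spectrum = rng.poisson(5.0, 1024).astype(float)
        spectrum[int(1460.8 / 3.0 * 1.02)] += 500.0   # peak where a 2% gain error puts it
        print(gain_correction(spectrum, nominal_kev_per_channel=3.0))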

  5. National Trends of Simple Prostatectomy for Benign Prostatic Hyperplasia With an Analysis of Risk Factors for Adverse Perioperative Outcomes.

    PubMed

    Pariser, Joseph J; Pearce, Shane M; Patel, Sanjay G; Bales, Gregory T

    2015-10-01

    To examine the national trends of simple prostatectomy (SP) for benign prostatic hyperplasia (BPH) focusing on perioperative outcomes and risk factors for complications. The National Inpatient Sample (2002-2012) was utilized to identify patients with BPH undergoing SP. Analysis included demographics, hospital details, associated procedures, and operative approach (open, robotic, or laparoscopic). Outcomes included complications, length of stay, charges, and mortality. Multivariate logistic regression was used to determine the risk factors for perioperative complications. Linear regression was used to assess the trends in the national annual utilization of SP. The study population included 35,171 patients. Median length of stay was 4 days (interquartile range 3-6). Cystolithotomy was performed concurrently in 6041 patients (17%). The overall complication rate was 28%, with bleeding occurring most commonly. In total, 148 (0.4%) patients experienced in-hospital mortality. On multivariate analysis, older age, black race, and overall comorbidity were associated with greater risk of complications while the use of a minimally invasive approach and concurrent cystolithotomy had a decreased risk. Over the study period, the national use of simple prostatectomy decreased, on average, by 145 cases per year (P = .002). By 2012, 135/2580 procedures (5%) were performed using a minimally invasive approach. The nationwide utilization of SP for BPH has decreased. Bleeding complications are common, but perioperative mortality is low. Patients who are older, black race, or have multiple comorbidities are at higher risk of complications. Minimally invasive approaches, which are becoming increasingly utilized, may reduce perioperative morbidity. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Establishing Linux Clusters for High-Performance Computing (HPC) at NPS

    DTIC Science & Technology

    2004-09-01

    Excerpts cover the Intel Roll and the Area51 Roll, including the results of generating an md5sum for the Area51 roll: all of the file information is available, and this number can be checked against the number that the vendor provides for the particular piece of software (Figure 22 shows the given md5sum for the Area51 roll from the download site).

  7. Stonix, Version 0.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-05-13

    STONIX is a program for configuring UNIX and Linux computer operating systems. It applies configurations based on guidance from publicly accessible resources such as NSA Guides, DISA STIGs, the Center for Internet Security (CIS), USGCB and vendor security documentation. STONIX is written in the Python programming language using the QT4 and PyQT4 libraries to provide a GUI. The code is designed to be easily extensible and customizable.

  8. Exploiting the Multi-Service Domain Protecting Interface

    DTIC Science & Technology

    2012-10-17

    ...Linux OpenVPN and IPSec VLAN services subsystems. Essentially, MSDPI becomes the transport mechanism for these subsystems. For the RIB, LSP, and... includes those files necessary to build a complete LiveCD system, for example adding various configuration files: ifcfg-eth?, ifcfg-ib?, openvpn... (aka IP address), openvpn files, and specific files in the etc/sysconfig directory, along with fragments of the RPM spec file (%prep, %build and %install sections).

  9. Research and Development of Collaborative Environments for Command and Control

    DTIC Science & Technology

    2011-05-01

    at any state of building. The viewer tool presents the designed model with 360-degree perspective views even after regeneration of the design, which... and it shows the following prompt: GUM>. First initialize the microSD card by typing GUM> mmcinit, then erase the old Linux kernel and the root file system on the flash memory

  10. Generating Computer Forensic Super Timelines under Linux: A Comprehensive Guide for Windows-based Disk Images

    DTIC Science & Technology

    2011-10-01

    This technical memorandum therefore provides a detailed description of the authors' approach to producing more detailed event timelines in...

  11. A database for coconut crop improvement.

    PubMed

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-12-08

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops), and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface was developed in PHP, HTML, and Java. http://www.bioinfcpcri.org.
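
    A toy illustration of the kind of lookup such a database supports; the table and column names are invented for this sketch, and Python's built-in sqlite3 module is used only so the example is self-contained (the system described above uses MySQL/PostgreSQL with a PHP/HTML/Java interface):

        import sqlite3

        # Invented schema; the actual CG/CCI/CD/MIFSPC/VO tables are not described
        # in the record above.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE germplasm (accession TEXT, cultivar TEXT, origin TEXT)")
        conn.executemany(
            "INSERT INTO germplasm VALUES (?, ?, ?)",
            [("CG-001", "Tall cultivar A", "India"),
             ("CG-002", "Dwarf cultivar B", "Sri Lanka")],
        )

        # Retrieve all accessions collected from a given origin.
        for accession, cultivar in conn.execute(
                "SELECT accession, cultivar FROM germplasm WHERE origin = ?", ("India",)):
            print(accession, cultivar)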

  12. Level-2 Milestone 5213. CTS-1 Contract Award Completed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, Matt

    2015-09-24

    This report documents that the first commodity technology (CT) system contract award, CTS-1, has been completed. The description of the milestone is: Based on the Tri-Lab CTS-1 process and review, LLNL successfully awards the procurement for the next-generation Tri-Lab Linux CTS-1. The milestone completion criterion is: signed contract. The milestone was completed on September 24, 2015.

  13. Android Video Streaming

    DTIC Science & Technology

    2014-05-01

    natural choice. In this document, we describe several aspects of video streaming and the challenges of performing video streaming between Android-based... client application was needed. Typically something like VideoLAN Client (VLC) is used for this purpose in a desktop environment. However, while VLC is... a very mature application on Windows and Linux, VLC for Android is still in a beta testing phase, and versions have only been developed to work

  14. Wiretapping the Internet

    NASA Astrophysics Data System (ADS)

    Antonelli, Charles J.; Honeyman, Peter

    2001-02-01

    This paper describes the Advanced Packet Vault, a technology for creating such a record by collecting and securely storing all packets observed on a network, with a scalable architecture intended to support network speeds in excess of 100 Mbps. Encryption is used to preserve users' security and privacy, permitting selected traffic to be made available without revealing other traffic. The Vault implementation, based on Linux and OpenBSD, is open-source.
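
    As a rough illustration of the idea that captured traffic can be stored encrypted so that selected traffic can later be released without revealing the rest, the sketch below encrypts each captured record under its own key using the third-party cryptography package; this is an assumption-laden toy, not the Vault's actual implementation:

        from cryptography.fernet import Fernet  # third-party: pip install cryptography

        # Toy stand-ins for captured packets; a real capture would come from libpcap.
        captured_packets = [b"packet one bytes", b"packet two bytes"]

        # Encrypt each packet under its own key so individual packets can later be
        # disclosed selectively by releasing only the corresponding keys.
        vault = []
        for pkt in captured_packets:
            key = Fernet.generate_key()
            vault.append((key, Fernet(key).encrypt(pkt)))

        # Releasing one key reveals only that packet.
        key, token = vault[0]
        print(Fernet(key).decrypt(token))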

  15. Multi-Target Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria

    2005-01-01

    This presentation is about the robotic exploration of Mars using a multiple-target command cycle, safe instrument placement, safe operation, and the K9 rover, which has a six-wheel-steer rocker-bogie chassis (FIDO, MER), is 70% of MER size, and carries a 1.2 GHz Pentium M laptop running the Linux OS, odometry and a compass/inclinometer, the CLARAty architecture, and a 5-DOF manipulator with a CHAMP microscopic camera, SciCams, NavCams, and HazCams.

  16. Improved Distance Learning Environment For Marine Forces Reserve

    DTIC Science & Technology

    2016-09-01

    keyboard, to form a desktop computer. Laptop computers share similar components but add mobility to the user. If additional desktop computers... for stationary computing devices such as desktop PCs and laptops include the Microsoft Windows, Mac OS, and Linux families of OSs (Hopkins... opportunities to all Marines. For active duty Marines, government-provided desktops and laptops (GPDLs) typically support DL T&E or learning resource

  17. The Design of NetSecLab: A Small Competition-Based Network Security Lab

    ERIC Educational Resources Information Center

    Lee, C. P.; Uluagac, A. S.; Fairbanks, K. D.; Copeland, J. A.

    2011-01-01

    This paper describes a competition-style exercise to teach system and network security and to reinforce themes taught in class. The exercise, called NetSecLab, is conducted on a closed network with student-formed teams, each with its own Linux system to defend and from which to launch attacks. Students are expected to learn how to: 1) install…

  18. Hardware Acceleration for Cyber Security

    DTIC Science & Technology

    2010-11-01

    perform different approaches. It includes behavioral analysis, by means of NetFlow monitoring, as well as packet content analysis, so-called Deep... Interface (API). An example of such an application is the NetFlow exporter described in [5]. • We provide a modified libpcap library using the libsze2 API. This... cards. The software applications using NIFIC include the FlowMon NetFlow/IPFIX generator, the Wireshark packet analyzer, iptables (the Linux kernel firewall), deep

  19. APL-UW Deep Water Propagation: Philippine Sea Signal Physics and North Pacific Ambient Noise and NPANL Support

    DTIC Science & Technology

    2014-09-30

    involve us in workshops focusing on the impact of ambient noise on the acoustic marine soundscape, and invite... SoundScapes” and a planning letter titled “GLOSS: GLobal Ocean SoundScapes: Phase I Connecting the world's oceans through sound” have been produced in order

  20. MemAxes Visualization Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardware advancements such as Intel's PEBS and AMD's IBS, as well as software developments such as the perf_event API in Linux, have made it possible to acquire memory access samples annotated with performance information. MemAxes is a visualization and analysis tool for memory access sample data. By mapping the samples to their associated code, variables, node topology, and application dataset, MemAxes provides intuitive views of the data.
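
    For context on the sampling mechanism mentioned above, the sketch below shows one common way to collect memory access samples on Linux by driving the perf tool from Python; the target binary is a placeholder, and MemAxes consumes sample data of this general kind rather than collecting it this way:

        import subprocess

        # Placeholder for the application to be profiled.
        target = ["./my_app"]

        # Record memory-access samples; on supported CPUs perf uses PEBS (Intel)
        # or IBS (AMD) to attach addresses and latencies to the samples.
        subprocess.run(["perf", "mem", "record", "--"] + target, check=True)

        # Produce a textual report of the recorded samples.
        report = subprocess.run(["perf", "mem", "report", "--stdio"],
                                capture_output=True, text=True, check=True)
        print(report.stdout)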
