Sample records for open source computer

  1. Research on OpenStack of open source cloud computing in colleges and universities’ computer room

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Zhang, Dandan

    2017-06-01

    In recent years, cloud computing technology has developed rapidly, and open source cloud computing in particular has attracted a large number of users through its openness and low cost; it has now reached the stage of large-scale promotion and application. In this paper we first briefly introduce the main functions and architecture of the open source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this analysis, we describe the specific application and deployment of OpenStack in a university computer room. The experimental results show that OpenStack can deploy a cloud for a university computer room efficiently and conveniently, with stable performance and good functional value.
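
    As a hedged illustration of the kind of deployment automation the abstract describes, the sketch below uses the openstacksdk Python client to boot a lab virtual machine on an OpenStack cloud; the cloud name, image, flavor and network names are hypothetical placeholders, and the snippet is not taken from the paper.

      # Minimal sketch (not from the paper): booting one lab VM with openstacksdk.
      # The cloud/image/flavor/network names below are hypothetical.
      import openstack

      # Credentials are read from clouds.yaml or OS_* environment variables.
      conn = openstack.connect(cloud="university-lab")

      image = conn.compute.find_image("ubuntu-22.04")      # hypothetical image name
      flavor = conn.compute.find_flavor("m1.small")        # hypothetical flavor name
      network = conn.network.find_network("lab-net")       # hypothetical network name

      server = conn.compute.create_server(
          name="lab-desktop-01",
          image_id=image.id,
          flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
      )
      server = conn.compute.wait_for_server(server)
      print(server.name, server.status)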

  2. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
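
    To make the translation problem concrete, the sketch below shows a hand-written NumPy equivalent of a one-line MATLAB® expression of the kind OMPC is designed to import automatically; it only illustrates the syntax adaptation involved (1-based versus 0-based indexing, implicit matrix semantics) and does not use OMPC's own API.

      # Hand-written NumPy equivalent of the MATLAB statement
      #   y = A(2:end, :) * x;   % drop the first row, multiply by a column vector
      # OMPC aims to perform this kind of translation automatically.
      import numpy as np

      A = np.arange(12.0).reshape(3, 4)   # example 3x4 matrix
      x = np.ones((4, 1))                 # example column vector

      # MATLAB's 1-based slice A(2:end, :) becomes the 0-based slice A[1:, :]
      y = A[1:, :] @ x
      print(y)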

  3. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  4. Simulation of partially coherent light propagation using parallel computing devices

    NASA Astrophysics Data System (ADS)

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

    Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence) the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32⁴, 64⁴), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
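
    For readers unfamiliar with PyOpenCL, the minimal sketch below (illustrative only, not the authors' code) shows the basic pattern the paper relies on: compile an OpenCL kernel from Python, push a NumPy array to the device, and run the kernel over it. The trivial scaling kernel stands in for the much larger propagation kernels described in the paper.

      # Minimal PyOpenCL pattern: compile a kernel, move a NumPy array to the
      # device, run the kernel, and read the result back.
      import numpy as np
      import pyopencl as cl
      import pyopencl.array as cl_array

      ctx = cl.create_some_context()          # pick an available OpenCL device
      queue = cl.CommandQueue(ctx)

      program = cl.Program(ctx, """
      __kernel void scale(__global float *a, const float s) {
          int i = get_global_id(0);
          a[i] *= s;
      }
      """).build()

      a = cl_array.to_device(queue, np.linspace(0.0, 1.0, 1024).astype(np.float32))
      program.scale(queue, a.shape, None, a.data, np.float32(2.0))
      print(a.get()[:4])                      # first few scaled values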

  5. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  6. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  7. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  8. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    PubMed

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.
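
    The ChemStar clients themselves are Java programs, but the per-molecule property calculation being distributed is easy to illustrate. The sketch below uses the open source RDKit toolkit, a substitute for the Marvin and JOELib packages named in the abstract and not part of ChemStar, to compute log P and TPSA for a few molecules.

      # Illustrative property calculation with RDKit (not the Marvin/JOELib code
      # used by ChemStar): compute log P and TPSA for a small list of molecules.
      from rdkit import Chem
      from rdkit.Chem import Crippen, Descriptors

      smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # ethanol, benzene, aspirin

      for smi in smiles:
          mol = Chem.MolFromSmiles(smi)
          logp = Crippen.MolLogP(mol)
          tpsa = Descriptors.TPSA(mol)
          print(f"{smi:24s} logP={logp:6.2f} TPSA={tpsa:6.2f}")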

  9. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and ease of access. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performance aspects, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in the central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions, (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing intensive applications, and small-scale model simulations without intensive data communication.

  10. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the... depth. The software application calibrates the cameras using the plane based calibration model from the OpenCV calib3D module and allows the... [6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home

  11. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    ERIC Educational Resources Information Center

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  13. Vision 2010: The Future of Higher Education Business and Learning Applications

    ERIC Educational Resources Information Center

    Carey, Patrick; Gleason, Bernard

    2006-01-01

    The global software industry is in the midst of a major evolutionary shift--one based on open computing--and this trend, like many transformative trends in technology, is being led by the IT staffs and academic computing faculty of the higher education industry. The elements of this open computing approach are open source, open standards, open…

  14. ACToR Chemical Structure processing using Open Source ChemInformatics Libraries (FutureToxII)

    EPA Science Inventory

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from ove...

  15. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, describe some of the key general attributes that make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  16. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173

  17. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  18. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
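
    As a rough usage sketch, the snippet below extracts a few of the features PyEEG provides from a synthetic signal; the function names (pfd, hurst, bin_power) are assumed from the PyEEG documentation rather than quoted from the paper and may differ between versions.

      # Rough PyEEG usage sketch; function names are assumptions, not from the paper.
      import numpy as np
      import pyeeg

      fs = 256                                     # sampling rate in Hz (example)
      t = np.arange(0, 4, 1.0 / fs)
      signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

      print("Petrosian fractal dimension:", pyeeg.pfd(signal))
      print("Hurst exponent:", pyeeg.hurst(signal))

      # Relative power in the classic EEG bands (delta/theta/alpha/beta)
      power, power_ratio = pyeeg.bin_power(signal, [0.5, 4, 7, 12, 30], fs)
      print("Band power ratios:", power_ratio)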

  19. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    PubMed Central

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582

  20. Open source system OpenVPN in a function of Virtual Private Network

    NASA Astrophysics Data System (ADS)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    The use of Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables secure networking over distributed or public network infrastructure, applying different security and management rules inside networks. It can be set up using different communication channels, such as the Internet or a separate ISP communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses special security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special emphasis on OpenVPN, and also presents a comparison and the financial benefits of using open source VPN software in a business environment.

  1. Open source bioimage informatics for cell biology

    PubMed Central

    Swedlow, Jason R.; Eliceiri, Kevin W.

    2009-01-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, describe some of the key general attributes that make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery. PMID:19833518

  2. SymPy: Symbolic computing in python

    DOE PAGES

    Meurer, Aaron; Smith, Christopher P.; Paprocki, Mateusz; ...

    2017-01-02

    Here, SymPy is a full featured computer algebra system (CAS) written in the Python programming language. It is open source, being licensed under the extremely permissive 3-clause BSD license. SymPy was started by Ondrej Certik in 2005, and it has since grown into a large open source project, with over 500 contributors.
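
    A brief illustrative example of the kind of symbolic manipulation SymPy provides (not drawn from the paper):

      # Small SymPy session: symbolic differentiation, integration and solving.
      import sympy as sp

      x = sp.symbols('x')
      expr = sp.sin(x) * sp.exp(x)

      print(sp.diff(expr, x))          # exp(x)*sin(x) + exp(x)*cos(x)
      print(sp.integrate(expr, x))     # antiderivative of the expression
      print(sp.solve(x**2 - 2, x))     # [-sqrt(2), sqrt(2)]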

  3. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  4. Human genome and open source: balancing ethics and business.

    PubMed

    Marturano, Antonio

    2011-01-01

    The Human Genome Project has been completed thanks to a massive use of computer techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won over the proprietary model and allowed a quick propagation and feedback of research results among peers. In this paper, the author will analyse some ethical and legal issues that emerge from the use of this computing model in relation to Human Genome property rights. The author will argue that Open Source is the best business model, as it is able to balance business and human rights perspectives.

  5. Pteros: fast and easy to use open-source C++ library for molecular analysis.

    PubMed

    Yesylevskyy, Semen O

    2012-07-15

    An open-source Pteros library for molecular modeling and analysis of molecular dynamics trajectories for C++ programming language is introduced. Pteros provides a number of routine analysis operations ranging from reading and writing trajectory files and geometry transformations to structural alignment and computation of nonbonded interaction energies. The library features asynchronous trajectory reading and parallel execution of several analysis routines, which greatly simplifies development of computationally intensive trajectory analysis algorithms. Pteros programming interface is very simple and intuitive while the source code is well documented and easily extendible. Pteros is available for free under open-source Artistic License from http://sourceforge.net/projects/pteros/. Copyright © 2012 Wiley Periodicals, Inc.

  6. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully-featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation with the one from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
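
    As a hedged illustration of the kind of substitution being benchmarked (not the authors' code), the snippet below times a 2-D FFT computed with NumPy against one computed with OpenCV's cv2.dft on the same array; the array size is arbitrary.

      # Illustrative timing of two FFT backends: NumPy's fft2 versus OpenCV's dft.
      import time
      import numpy as np
      import cv2

      field = np.random.rand(2048, 2048).astype(np.float32)

      t0 = time.perf_counter()
      spec_np = np.fft.fft2(field)
      t1 = time.perf_counter()
      spec_cv = cv2.dft(field, flags=cv2.DFT_COMPLEX_OUTPUT)
      t2 = time.perf_counter()

      print(f"numpy.fft.fft2: {t1 - t0:.3f} s, cv2.dft: {t2 - t1:.3f} s")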

  7. The Role of Standards in Cloud-Computing Interoperability

    DTIC Science & Technology

    2012-10-01

    services are not shared outside the organization. CloudStack, Eucalyptus, HP, Microsoft, OpenStack, Ubuntu, and VMWare provide tools for building... center requirements • Developing usage models for cloud vendors • Independent IT consortium OpenStack http://www.openstack.org • Open-source... software for running private clouds • Currently consists of three core software projects: OpenStack Compute (Nova), OpenStack Object Storage (Swift

  8. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar areas. On the other hand, CFD research aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support and execution efficiency.

  9. Numerical Predictions of Mode Reflections in an Open Circular Duct: Comparison with Theory

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray

    2015-01-01

    The NASA Broadband Aeroacoustic Stator Simulation code was used to compute the acoustic field for higher-order modes in a circular duct geometry. To test the accuracy of the results computed by the code, the duct was terminated by an open end with an infinite flange or no flange. Both open end conditions have a theoretical solution that was used to compare with the computed results. Excellent agreement for reflection matrix values was achieved after suitable refinement of the grid at the open end. The study also revealed issues with the level of the mode amplitude introduced into the acoustic field from the source boundary and the amount of reflection that occurred at the source boundary when a general nonreflecting boundary condition was applied.

  10. Using Open Source Software in Visual Simulation Development

    DTIC Science & Technology

    2005-09-01

    increased the use of the technology in training activities. Using open source/free software tools in the process can expand these possibilities... resulting in even greater cost reduction and allowing the flexibility needed in a training environment. This thesis presents a configuration and architecture... to be used when developing training visual simulations using both personal computers and open source tools. Aspects of the requirements needed in a

  11. Open Drug Discovery Toolkit (ODDT): a new open-source player in the drug discovery field.

    PubMed

    Wójcikowski, Maciej; Zielenkiewicz, Piotr; Siedlecki, Pawel

    2015-01-01

    There has been huge progress in the open cheminformatics field in both methods and software development. Unfortunately, there has been little effort to unite those methods and software into one package. We here describe the Open Drug Discovery Toolkit (ODDT), which aims to fulfill the need for comprehensive and open source drug discovery software. The Open Drug Discovery Toolkit was developed as a free and open source tool for both computer aided drug discovery (CADD) developers and researchers. ODDT reimplements many state-of-the-art methods, such as machine learning scoring functions (RF-Score and NNScore) and wraps other external software to ease the process of developing CADD pipelines. ODDT is an out-of-the-box solution designed to be easily customizable and extensible. Therefore, users are strongly encouraged to extend it and develop new methods. We here present three use cases for ODDT in common tasks in computer-aided drug discovery. Open Drug Discovery Toolkit is released on a permissive 3-clause BSD license for both academic and industrial use. ODDT's source code, additional examples and documentation are available on GitHub (https://github.com/oddt/oddt).
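
    As a rough sketch of how an ODDT pipeline typically starts (the readfile helper and the molecule attributes used here are assumed from the ODDT documentation, not quoted from the paper, and 'ligands.sdf' is a hypothetical input file):

      # Rough ODDT sketch; oddt.toolkit.readfile and the attributes below are
      # assumed from the ODDT documentation and may differ between versions.
      import oddt

      ligands = list(oddt.toolkit.readfile('sdf', 'ligands.sdf'))  # hypothetical file
      print("Read", len(ligands), "molecules")

      for mol in ligands[:3]:
          # Title and molecular weight via the underlying cheminformatics toolkit.
          print(mol.title, mol.molwt)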

  12. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Open-Source Syringe Pump Library

    PubMed Central

    Wijnen, Bas; Hunt, Emily J.; Anzalone, Gerald C.; Pearce, Joshua M.

    2014-01-01

    This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less than one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of a given research activity requiring a syringe pump, including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media, among other applications. PMID:25229451
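
    As a hedged sketch of the control idea only (not the published design files; the pin numbers and the steps-per-microliter constant are hypothetical), the snippet below pulses a stepper driver from a Raspberry Pi with the RPi.GPIO library to dispense a requested volume.

      # Hypothetical stepper-control sketch for a syringe pump on a Raspberry Pi;
      # pin assignments and the calibration constant are made up for illustration.
      import time
      import RPi.GPIO as GPIO

      STEP_PIN = 17          # hypothetical GPIO pin wired to the driver STEP input
      DIR_PIN = 27           # hypothetical GPIO pin wired to the driver DIR input
      STEPS_PER_UL = 40      # hypothetical calibration: motor steps per microliter

      GPIO.setmode(GPIO.BCM)
      GPIO.setup(STEP_PIN, GPIO.OUT)
      GPIO.setup(DIR_PIN, GPIO.OUT)

      def dispense(volume_ul, step_delay=0.002):
          """Push the plunger forward to dispense volume_ul microliters."""
          GPIO.output(DIR_PIN, GPIO.HIGH)        # forward direction
          for _ in range(int(volume_ul * STEPS_PER_UL)):
              GPIO.output(STEP_PIN, GPIO.HIGH)
              time.sleep(step_delay)
              GPIO.output(STEP_PIN, GPIO.LOW)
              time.sleep(step_delay)

      dispense(50)           # dispense 50 microliters
      GPIO.cleanup()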

  14. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
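
    A brief, hedged example of the Transparent API usage pattern from Python: wrapping an image in cv2.UMat lets the same OpenCV calls run on an OpenCL device when one is available and fall back to the CPU otherwise (illustrative code, not from the paper).

      # Transparent API sketch: the same calls run via OpenCL when available,
      # because the input is a cv2.UMat rather than a NumPy array.
      import numpy as np
      import cv2

      print("OpenCL available:", cv2.ocl.haveOpenCL())

      img = (np.random.rand(1080, 1920) * 255).astype(np.uint8)
      img_u = cv2.UMat(img)                   # device-side (or CPU fallback) image

      blurred_u = cv2.GaussianBlur(img_u, (7, 7), 1.5)
      edges_u = cv2.Canny(blurred_u, 50, 150)

      edges = edges_u.get()                   # copy the result back to a ndarray
      print(edges.shape, edges.dtype)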

  15. An Open Source Rapid Computer Aided Control System Design Toolchain Using Scilab, Scicos and RTAI Linux

    NASA Astrophysics Data System (ADS)

    Bouchpan-Lerust-Juéry, L.

    2007-08-01

    Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of reliability and predictability as well as safety. All these requirements demand a considerable amount of effort and cost from the space software industry. In its first part, this paper presents a free Open Source integrated solution for developing RTAI applications spanning analysis, design, simulation and direct implementation using code generation based on Open Source tools. Its second part summarises the suggested approach, its results and the conclusions for further work.

  16. Forward Field Computation with OpenMEEG

    PubMed Central

    Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen

    2011-01-01

    To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip is operational since release 2.0. PMID:21437231

  17. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.

  18. Open source tools for large-scale neuroscience.

    PubMed

    Freeman, Jeremy

    2015-06-01

    New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.

  19. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to process them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.

  20. Developing Open Source Software To Advance High End Computing. Report to the President.

    ERIC Educational Resources Information Center

    National Coordination Office for Information Technology Research and Development, Arlington, VA.

    This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…

  1. STORMSeq: an open-source, user-friendly pipeline for processing personal genomics data in the cloud.

    PubMed

    Karczewski, Konrad J; Fernald, Guy Haskin; Martin, Alicia R; Snyder, Michael; Tatonetti, Nicholas P; Dudley, Joel T

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5-10 hours to process a full exome sequence and $30 and 3-8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2.

  2. Sharing the Code.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2003-01-01

    Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)

  3. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software in order to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open source software for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.

  4. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors as well as Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  5. STORMSeq: An Open-Source, User-Friendly Pipeline for Processing Personal Genomics Data in the Cloud

    PubMed Central

    Karczewski, Konrad J.; Fernald, Guy Haskin; Martin, Alicia R.; Snyder, Michael; Tatonetti, Nicholas P.; Dudley, Joel T.

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5–10 hours to process a full exome sequence and $30 and 3–8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2. PMID:24454756

  6. Hypersonic simulations using open-source CFD and DSMC solvers

    NASA Astrophysics Data System (ADS)

    Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.

    2016-11-01

    Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have shown to compare reasonably well, thus providing a useful basis for other codes to compare against.

  7. SymPy: Symbolic computing in python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meurer, Aaron; Smith, Christopher P.; Paprocki, Mateusz

    Here, SymPy is a full featured computer algebra system (CAS) written in the Python programming language. It is open source, being licensed under the extremely permissive 3-clause BSD license. SymPy was started by Ondrej Certik in 2005, and it has since grown into a large open source project, with over 500 contributors.

  8. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
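
    To make the comparison concrete, the sketch below times a small upload/download round trip with the boto3 client; switching endpoint_url between a private S3-compatible service (such as a Eucalyptus Walrus endpoint) and the public Amazon S3 endpoint is the kind of change the paper's measurements involve. The endpoint URL, bucket name and credentials shown are hypothetical.

      # Illustrative I/O timing against an S3-compatible endpoint with boto3.
      # The endpoint URL, bucket name and credentials below are hypothetical.
      import io
      import time
      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="http://walrus.example.edu:8773/services/objectstorage",
          aws_access_key_id="EXAMPLE_KEY",
          aws_secret_access_key="EXAMPLE_SECRET",
      )

      payload = b"x" * (16 * 1024 * 1024)      # 16 MiB test object

      t0 = time.perf_counter()
      s3.upload_fileobj(io.BytesIO(payload), "benchmark-bucket", "testobject")
      t1 = time.perf_counter()
      s3.download_fileobj("benchmark-bucket", "testobject", io.BytesIO())
      t2 = time.perf_counter()

      print(f"upload: {t1 - t0:.2f} s, download: {t2 - t1:.2f} s")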

  9. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
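
    As a rough usage sketch (the grass.script entry point and the parameter names shown are assumed from the GRASS GIS documentation, not taken from the paper), the three modules can be driven from Python in the usual way; the parallelized versions are invoked identically, with the thread count controlled by OpenMP.

      # Rough sketch of calling the three GRASS GIS modules from Python; map names
      # and parameter names are assumptions for illustration, not from the paper.
      import grass.script as gs

      # Interpolate a surface from LiDAR points with v.surf.rst
      gs.run_command("v.surf.rst", input="lidar_points", elevation="dtm_1m", npmin=120)

      # Compute global solar radiation for day 172 (summer solstice) with r.sun
      gs.run_command("r.sun", elevation="dtm_1m", glob_rad="radiation_172", day=172)

      # Simulate overland water flow with r.sim.water (dx/dy are first-derivative
      # maps, e.g. produced beforehand by r.slope.aspect)
      gs.run_command("r.sim.water", elevation="dtm_1m", dx="dtm_dx", dy="dtm_dy",
                     depth="water_depth")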

  10. Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato

    2017-04-01

    Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source level dynamics, local measurements, and urban scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low cost computer aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy layer temperature gradients and convection on sensor footprints.

  11. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.

  12. Computation in Classical Mechanics with Easy Java Simulations (EJS)

    NASA Astrophysics Data System (ADS)

    Cox, Anne J.

    2006-12-01

    Let your students enjoy creating animations and incorporating some computational physics into your Classical Mechanics course. This talk will demonstrate the use of an Open Source Physics package, Easy Java Simulations (EJS), in an already existing sophomore/junior level Classical Mechanics course. EJS allows for incremental introduction of computational physics into existing courses because it is easy to use (for instructors and students alike) and it is open source. Students can use this tool for numerical solutions to problems (as they can with commercial systems: Mathcad and Mathematica), but they can also generate their own animations. For example, students in Classical Mechanics use Lagrangian mechanics to solve a problem, and then use EJS not only to numerically solve the differential equations, but to show the associated motion (and check their answers). EJS, developed by Francisco Esquembre (http://fem.um.es/Ejs/), is built on the OpenSource Physics framework (http://www.opensourcephysics.org/) supported through NSF DUE0442581.

  13. Open-Source Software in Computational Research: A Case Study

    DOE PAGES

    Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; ...

    2008-01-01

    A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.

  14. Removing a barrier to computer-based outbreak and disease surveillance--the RODS Open Source Project.

    PubMed

    Espino, Jeremy U; Wagner, M; Szczepaniak, C; Tsui, F C; Su, H; Olszewski, R; Liu, Z; Chapman, W; Zeng, X; Ma, L; Lu, Z; Dara, J

    2004-09-24

    Computer-based outbreak and disease surveillance requires high-quality software that is well-supported and affordable. Developing software in an open-source framework, which entails free distribution and use of software and continuous, community-based software development, can produce software with such characteristics, and can do so rapidly. The objective of the Real-Time Outbreak and Disease Surveillance (RODS) Open Source Project is to accelerate the deployment of computer-based outbreak and disease surveillance systems by writing software and catalyzing the formation of a community of users, developers, consultants, and scientists who support its use. The University of Pittsburgh seeded the Open Source Project by releasing the RODS software under the GNU General Public License. An infrastructure was created, consisting of a website, mailing lists for developers and users, designated software developers, and shared code-development tools. These resources are intended to encourage growth of the Open Source Project community. Progress is measured by assessing website usage, number of software downloads, number of inquiries, number of system deployments, and number of new features or modules added to the code base. During September--November 2003, users generated 5,370 page views of the project website, 59 software downloads, 20 inquiries, one new deployment, and addition of four features. Thus far, health departments and companies have been more interested in using the software as is than in customizing or developing new features. The RODS laboratory anticipates that after initial installation has been completed, health departments and companies will begin to customize the software and contribute their enhancements to the public code base.

  15. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    DOE PAGES

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...

    2015-07-08

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.

  16. Reflections on the role of open source in health information system interoperability.

    PubMed

    Sfakianakis, S; Chronaki, C E; Chiarugi, F; Conforti, F; Katehakis, D G

    2007-01-01

    This paper reflects on the role of open source in health information system interoperability. Open source is a driving force in computer science research and the development of information systems. It facilitates the sharing of information and ideas, enables evolutionary development and open collaborative testing of code, and broadens the adoption of interoperability standards. In health care, information systems have been developed largely ad hoc following proprietary specifications and customized design. However, the wide deployment of integrated services such as Electronic Health Records (EHRs) over regional health information networks (RHINs) relies on interoperability of the underlying information systems and medical devices. This reflection is built on the experiences of the PICNIC project that developed shared software infrastructure components in open source for RHINs and the OpenECG network that offers open source components to lower the implementation cost of interoperability standards such as SCP-ECG, in electrocardiography. Open source components implementing standards and a community providing feedback from real-world use are key enablers of health care information system interoperability. Investing in open source is investing in interoperability and a vital aspect of a long term strategy towards comprehensive health services and clinical research.

  17. Open source Modeling and optimization tools for Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peles, S.

    Open source modeling and optimization tools for planning. The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  18. The SAMI2 Open Source Project

    NASA Astrophysics Data System (ADS)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  19. Predictive Factors in Conflict: Assessing the Likelihood of a Preemptive Strike by Israel on Iran Using a Computer Model

    DTIC Science & Technology

    2013-03-01

    Proliferation Treaty; OSINT Open Source Intelligence; SAFF Safing, Arming, Fuzing, Firing; SIAM Situational Influence Assessment Module; SME Subject...expertise. One of the analysts can also be trained to tweak CAST logic as needed. In this initial build, only open-source intelligence (OSINT) will...

  20. Computational Fluids Domain Reduction to a Simplified Fluid Network

    DTIC Science & Technology

    2012-04-19

    readily available read/write software library. Code components from the open source projects OpenFOAM and ParaView were explored for their adaptability...to the project. Both ParaView and OpenFOAM read polyhedral meshes. OpenFOAM does not read results data. ParaView actually allows for user “filters

  1. Open-source Software for Exoplanet Atmospheric Modeling

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra, implemented in Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
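
    As a toy illustration of the Markov-chain Monte Carlo posterior sampling mentioned above, the following minimal sketch runs a Metropolis sampler on a one-parameter Gaussian model. It is not code from the described package; the data, model and step size are all invented for the example.

        import numpy as np

        def log_posterior(theta, data):
            # toy Gaussian likelihood with unit variance and an (implicit) flat prior
            return -0.5 * np.sum((data - theta) ** 2)

        def metropolis(data, n_steps=5000, step=0.5, seed=0):
            rng = np.random.default_rng(seed)
            theta = 0.0
            logp = log_posterior(theta, data)
            chain = np.empty(n_steps)
            for i in range(n_steps):
                proposal = theta + step * rng.normal()
                logp_new = log_posterior(proposal, data)
                if np.log(rng.random()) < logp_new - logp:   # accept/reject step
                    theta, logp = proposal, logp_new
                chain[i] = theta
            return chain

        data = np.random.default_rng(1).normal(3.0, 1.0, size=50)
        chain = metropolis(data)
        print("posterior mean after burn-in:", chain[1000:].mean())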

  2. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  3. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. Much of what they offer is now considered standard in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.
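
    To make the rapid-prototyping point concrete, here is a brief, hedged sketch using SimpleITK, one open-source medical imaging library of the kind such surveys cover; the survey itself does not prescribe this code, and the file names and smoothing parameter are placeholders.

        import SimpleITK as sitk   # pip install SimpleITK

        image = sitk.ReadImage("ct_volume.nii.gz")                 # hypothetical input volume
        smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
        mask = sitk.OtsuThreshold(smoothed, 0, 1)                  # foreground labelled 1
        sitk.WriteImage(mask, "ct_mask.nii.gz")
        print("voxel spacing:", image.GetSpacing())

    A handful of lines like these, built entirely on an open-source stack, already cover reading a clinical volume, denoising it and producing a first segmentation, which is exactly the kind of acceleration the paper argues for.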

  4. CSNS computing environment Based on OpenStack

    NASA Astrophysics Data System (ADS)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to actual need. We are applying this computing model to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated with respect to the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed. Finally, the current status of the CSNS cloud computing environment is summarized at the end of the paper.
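
    As a hedged illustration of driving an OpenStack-based environment like the one described, the sketch below uses the openstacksdk Python client; the cloud name, image, flavor and network names are assumptions and are not taken from the paper.

        import openstack   # pip install openstacksdk; credentials come from a clouds.yaml entry

        conn = openstack.connect(cloud="csns-cloud")   # "csns-cloud" is a hypothetical cloud name

        # list what is already running
        for server in conn.compute.servers():
            print(server.name, server.status)

        # launch one worker VM (image, flavor and network names are assumptions)
        image = conn.compute.find_image("CentOS-7")
        flavor = conn.compute.find_flavor("m1.large")
        network = conn.network.find_network("private")
        server = conn.compute.create_server(
            name="csns-worker-01", image_id=image.id, flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print("launched", server.name)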

  5. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
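
    QIFE itself is MATLAB-based, but the object-level parallelization idea (one tumor per task) can be sketched in a few lines of Python; everything below, including the synthetic volumes and the toy feature function, is an illustrative assumption rather than QIFE code.

        from multiprocessing import Pool
        import numpy as np

        def extract_features(volume):
            # stand-in for a radiomics feature computation on one segmented tumor
            return {"volume_voxels": int(volume.size),
                    "mean_intensity": float(volume.mean())}

        if __name__ == "__main__":
            tumors = [np.random.rand(64, 64, 32) for _ in range(108)]   # synthetic stand-ins
            with Pool(processes=4) as pool:        # object-level parallelism: one tumor per task
                features = pool.map(extract_features, tumors)
            print(features[0])

    The trade-off noted in the abstract shows up directly here: more worker processes shorten wall-clock time but hold more tumor volumes in memory at once.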

  6. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre that does image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479
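
    The mosaic idea (dividing a huge image into small tiles) can be sketched conceptually with Pillow; this is not the NDPITools implementation, and for truly gigapixel NDPI slides that exceed standard decoders this naive approach is exactly what breaks down. The file name and tile size are placeholders.

        from PIL import Image

        Image.MAX_IMAGE_PIXELS = None      # disable the decompression-bomb guard for large slides
        slide = Image.open("slide.tif")    # hypothetical whole-slide TIFF that still fits a decoder
        tile_size = 2048
        w, h = slide.size
        for top in range(0, h, tile_size):
            for left in range(0, w, tile_size):
                box = (left, top, min(left + tile_size, w), min(top + tile_size, h))
                slide.crop(box).save(f"tile_{top}_{left}.tif")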

  7. Web Solutions Inspire Cloud Computing Software

    NASA Technical Reports Server (NTRS)

    2013-01-01

    An effort at Ames Research Center to standardize NASA websites unexpectedly led to a breakthrough in open source cloud computing technology. With the help of Rackspace Inc. of San Antonio, Texas, the resulting product, OpenStack, has spurred the growth of an entire industry that is already employing hundreds of people and generating hundreds of millions in revenue.

  8. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

    We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.

  9. Shaping Software Engineering Curricula Using Open Source Communities: A Case Study

    ERIC Educational Resources Information Center

    Bowring, James; Burke, Quinn

    2016-01-01

    This paper documents four years of a novel approach to teaching a two-course sequence in software engineering as part of the ABET-accredited computer science curriculum at the College of Charleston. This approach is team-based and centers on learning software engineering in the context of open source software projects. In the first course, teams…

  10. The Pixhawk Open-Source Computer Vision Framework for Mavs

    NASA Astrophysics Data System (ADS)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems are, however, geared towards applications at safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  11. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    PubMed

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.

  13. "Do-It-Yourself" reliable pH-stat device by using open-source software, inexpensive hardware and available laboratory equipment

    PubMed Central

    Kragic, Rastislav; Kostic, Mirjana

    2018-01-01

    In this paper, we present the construction of a reliable and inexpensive pH stat device, by using open-source “OpenPhControl” software, inexpensive hardware (a peristaltic and a syringe pump, Arduino, a step motor…), readily available laboratory devices: a pH meter, a computer, a webcam, and some 3D printed parts. We provide a methodology for the design, development and test results of each part of the device, as well as of the entire system. In addition to dosing reagents by means of a low-cost peristaltic pump, we also present carefully controlled dosing of reagents by an open-source syringe pump. The upgrading of the basic open-source syringe pump is given in terms of pump control and application of a larger syringe. In addition to the basic functions of pH stat, i.e. pH value measurement and maintenance, an improvement allowing the device to be used for potentiometric titration has been made as well. We have demonstrated the device’s utility when applied for cellulose fibers oxidation with 2,2,6,6-tetramethylpiperidine-1-oxyl radical, i.e. for TEMPO-mediated oxidation. In support of this, we present the results obtained for the oxidation kinetics, the consumption of added reagent and experimental repeatability. Considering that the open-source scientific tools are available to everyone, and that researchers can construct and adjust the device according to their needs, as well as, that the total cost of the open-source pH stat device, excluding the existing laboratory equipment (pH meter, computer and glassware) was less than 150 EUR, we believe that, at a small fraction of the cost of available commercial offers, our open-source pH stat can significantly improve experimental work where the use of pH stat is necessary. PMID:29509793

  14. "Do-It-Yourself" reliable pH-stat device by using open-source software, inexpensive hardware and available laboratory equipment.

    PubMed

    Milanovic, Jovana Z; Milanovic, Predrag; Kragic, Rastislav; Kostic, Mirjana

    2018-01-01

    In this paper, we present the construction of a reliable and inexpensive pH stat device, by using open-source "OpenPhControl" software, inexpensive hardware (a peristaltic and a syringe pump, Arduino, a step motor…), readily available laboratory devices: a pH meter, a computer, a webcam, and some 3D printed parts. We provide a methodology for the design, development and test results of each part of the device, as well as of the entire system. In addition to dosing reagents by means of a low-cost peristaltic pump, we also present carefully controlled dosing of reagents by an open-source syringe pump. The upgrading of the basic open-source syringe pump is given in terms of pump control and application of a larger syringe. In addition to the basic functions of pH stat, i.e. pH value measurement and maintenance, an improvement allowing the device to be used for potentiometric titration has been made as well. We have demonstrated the device's utility when applied for cellulose fibers oxidation with 2,2,6,6-tetramethylpiperidine-1-oxyl radical, i.e. for TEMPO-mediated oxidation. In support of this, we present the results obtained for the oxidation kinetics, the consumption of added reagent and experimental repeatability. Considering that the open-source scientific tools are available to everyone, and that researchers can construct and adjust the device according to their needs, as well as, that the total cost of the open-source pH stat device, excluding the existing laboratory equipment (pH meter, computer and glassware) was less than 150 EUR, we believe that, at a small fraction of the cost of available commercial offers, our open-source pH stat can significantly improve experimental work where the use of pH stat is necessary.

  15. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    average to a sequence of 100 captured distortion-corrected images. The OpenCV library [OpenCV] was used for camera calibration. To correct for...rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and...Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV: Open Source Computer Vision. [Available

  16. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage, as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has been published under an open source license itself as well. An option has been included to accelerate the calculations by a partial implementation of the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
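
    To illustrate the core of a Fourier pseudospectral scheme, the snippet below computes a spatial derivative spectrally on a periodic 1D domain with NumPy; it is a minimal, generic sketch of the method, not code from openPSTD, and the grid size and test field are arbitrary.

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.linspace(0.0, L, N, endpoint=False)
        u = np.sin(3 * x)                              # test field with a known derivative

        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # wavenumbers for the periodic domain
        du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

        print(np.max(np.abs(du - 3 * np.cos(3 * x))))  # error near machine precision

    Because derivatives are evaluated in Fourier space, the scheme stays accurate with only about two points per wavelength, which is the source of the coarse-sampling efficiency mentioned in the abstract.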

  17. Profugus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Thomas; Hamilton, Steven; Slattery, Stuart

    Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.

  18. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.

  19. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested using the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results; however, it also has the most complex source code. The parallel SCE-UA has bright prospects for application in real-world problems.
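
    As a small, hedged illustration of evaluating the Griewank benchmark in parallel on a multi-core CPU (the paper's OpenCL/CUDA/OpenACC implementations are not reproduced here), the sketch below distributes function evaluations across processes with Python's multiprocessing; the population size and dimensionality are arbitrary.

        import numpy as np
        from multiprocessing import Pool

        def griewank(x):
            # Griewank benchmark: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))
            x = np.asarray(x, dtype=float)
            i = np.arange(1, x.size + 1)
            return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            population = [rng.uniform(-600, 600, size=10) for _ in range(1024)]
            with Pool() as pool:                      # evaluate candidate points in parallel
                fitness = pool.map(griewank, population)
            print("best objective in population:", min(fitness))

    In an evolutionary scheme like SCE-UA the objective evaluations dominate the run time, which is why parallelizing exactly this step yields most of the reported speed-up.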

  20. 3D reconstruction software comparison for short sequences

    NASA Astrophysics Data System (ADS)

    Strupczewski, Adam; Czupryński, Błażej

    2014-11-01

    Large scale multiview reconstruction is recently a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing of the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure from motion pipeline developed by the authors from scratch with the use of OpenCV and Eigen libraries.
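
    The authors' own pipeline is not published in the abstract, but a minimal two-view sketch of the kind of OpenCV calls involved in such a structure-from-motion pipeline might look as follows; the image files and camera intrinsics are placeholder assumptions.

        import cv2
        import numpy as np

        img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
        img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
        K = np.array([[1200.0, 0, 960], [0, 1200.0, 540], [0, 0, 1]])  # assumed intrinsics

        # detect and match features
        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # estimate the essential matrix and recover the relative camera pose
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())

    A full pipeline would then triangulate the inlier matches and run bundle adjustment over all views, which is what the large open-source reconstruction packages compared in the paper automate.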

  1. Software for Computing, Archiving, and Querying Semisimple Braided Monoidal Category Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This software package collects various open source and freely available codes and algorithms to compute and archive the categorical data for certain semisimple braided monoidal categories. In particular, it computes the data for group-theoretical categories for academic research.

  2. Kudi: A free open-source python library for the analysis of properties along reaction paths.

    PubMed

    Vogt-Geisse, Stefan

    2016-05-01

    With increasing computational capabilities, an ever growing amount of data is generated in computational chemistry that contains a vast amount of chemically relevant information. It is therefore imperative to create new computational tools in order to process and extract this data in a sensible way. Kudi is an open source library that aids in the extraction of chemical properties from reaction paths. The straightforward structure of Kudi makes it easy to use for users and allows for effortless implementation of new capabilities, and extension to any quantum chemistry package. A use case for Kudi is shown for the tautomerization reaction of formic acid. Kudi is available free of charge at www.github.com/stvogt/kudi.

  3. pyNS: an open-source framework for 0D haemodynamic modelling.

    PubMed

    Manini, Simone; Antiga, Luca; Botti, Lorenzo; Remuzzi, Andrea

    2015-06-01

    A number of computational approaches have been proposed for the simulation of haemodynamics and vascular wall dynamics in complex vascular networks. Among them, 0D pulse wave propagation methods allow flow and pressure distributions and wall displacements throughout vascular networks to be modeled efficiently at low computational cost. Although several techniques are documented in the literature, the availability of open-source computational tools is still limited. We here present python Network Solver, a modular solver framework for 0D problems released under a BSD license as part of the archToolkit ( http://archtk.github.com ). As an application, we describe patient-specific models of the systemic circulation and detailed upper extremity for use in the prediction of maturation after surgical creation of vascular access for haemodialysis.
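
    To give a flavour of what a 0D (lumped-parameter) haemodynamic model is, the sketch below integrates a classic two-element Windkessel with SciPy; it is a generic textbook example under assumed parameter values, not pyNS code.

        import numpy as np
        from scipy.integrate import solve_ivp

        R, C = 1.0, 1.5          # peripheral resistance and arterial compliance (arbitrary units)
        T, t_sys = 0.8, 0.3      # cardiac period and systolic duration [s]

        def inflow(t):
            # half-sine inflow during systole, zero during diastole
            phase = t % T
            return np.sin(np.pi * phase / t_sys) if phase < t_sys else 0.0

        def windkessel(t, p):
            # two-element Windkessel: C dP/dt = Q(t) - P / R
            return (inflow(t) - p[0] / R) / C

        sol = solve_ivp(windkessel, (0.0, 10 * T), [1.0], max_step=1e-3)
        print("pressure range over the simulation:", sol.y[0].min(), sol.y[0].max())

    Network solvers such as pyNS chain many such compartments together so that pressures and flows propagate through an entire vascular tree at a fraction of the cost of a 3D simulation.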

  4. OpenStructure: a flexible software framework for computational structural biology.

    PubMed

    Biasini, Marco; Mariani, Valerio; Haas, Jürgen; Scheuber, Stefan; Schenk, Andreas D; Schwede, Torsten; Philippsen, Ansgar

    2010-10-15

    Developers of new methods in computational structural biology are often hampered in their research by incompatible software tools and non-standardized data formats. To address this problem, we have developed OpenStructure as a modular open source platform to provide a powerful, yet flexible general working environment for structural bioinformatics. OpenStructure consists primarily of a set of libraries written in C++ with a cleanly designed application programmer interface. All functionality can be accessed directly in C++ or in a Python layer, meeting both the requirements for high efficiency and ease of use. Powerful selection queries and the notion of entity views to represent these selections greatly facilitate the development and implementation of algorithms on structural data. The modular integration of computational core methods with powerful visualization tools makes OpenStructure an ideal working and development environment. Several applications, such as the latest versions of IPLT and QMean, have been implemented based on OpenStructure-demonstrating its value for the development of next-generation structural biology algorithms. Source code licensed under the GNU lesser general public license and binaries for MacOS X, Linux and Windows are available for download at http://www.openstructure.org. torsten.schwede@unibas.ch Supplementary data are available at Bioinformatics online.

  5. Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows

    NASA Astrophysics Data System (ADS)

    Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.

    2014-12-01

    The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which includes toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is in the automated job launching and monitoring capabilities, which allow a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to the users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.

  6. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  7. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  8. MetAMOS: a modular and open source metagenomic assembly and analysis pipeline

    PubMed Central

    2013-01-01

    We describe MetAMOS, an open source and modular metagenomic assembly and analysis pipeline. MetAMOS represents an important step towards fully automated metagenomic analysis, starting with next-generation sequencing reads and producing genomic scaffolds, open-reading frames and taxonomic or functional annotations. MetAMOS can aid in reducing assembly errors, commonly encountered when assembling metagenomic samples, and improves taxonomic assignment accuracy while also reducing computational cost. MetAMOS can be downloaded from: https://github.com/treangen/MetAMOS. PMID:23320958

  9. Menu-driven cloud computing and resource sharing for R and Bioconductor

    PubMed Central

    Bolouri, Hamid; Angerman, Michael

    2011-01-01

    Summary: We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. Availability and Implementation: CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org PMID:21685055

  10. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    PubMed Central

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J; Schuster, Heiko; Ternette, Nicola; Alpízar, Adán; Schittenhelm, Ralf B; Ramarathinam, Sri H; Lindestam Arlehamn, Cecilia S; Chiek Koh, Ching; Gillet, Ludovic C; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David S; Deutsch, Eric W; Moritz, Robert L; Purcell, Anthony W; Rammensee, Hans-Georg; Stevanovic, Stefan; Aebersold, Ruedi

    2015-01-01

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies. DOI: http://dx.doi.org/10.7554/eLife.07661.001 PMID:26154972

  11. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated for the control of optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods. PMID:23544104

  12. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated for the control of optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods.

  13. Low Cost, Scalable Proteomics Data Analysis Using Amazon's Cloud Computing Services and Open Source Search Algorithms

    PubMed Central

    Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.

    2009-01-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
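
    As a hedged, present-day sketch of launching such cloud nodes programmatically (the paper itself describes preconfigured Amazon machine images rather than this code), the boto3 snippet below starts EC2 instances; the AMI ID, instance type, key name and region are placeholders.

        import boto3   # pip install boto3; AWS credentials must be configured beforehand

        ec2 = boto3.resource("ec2", region_name="us-east-1")
        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID for a preconfigured search node
            InstanceType="c5.xlarge",
            MinCount=1, MaxCount=4,            # scale the virtual cluster up or down here
            KeyName="my-keypair",
        )
        for inst in instances:
            inst.wait_until_running()
            inst.reload()
            print(inst.id, inst.public_dns_name)

    Because the instances are billed by the hour and terminated after the search finishes, the per-run cost model described in the abstract follows directly from this launch-use-terminate pattern.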

  14. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    PubMed

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).

  15. Development of AN Open-Source Automatic Deformation Monitoring System for Geodetical and Geotechnical Measurements

    NASA Astrophysics Data System (ADS)

    Engel, P.; Schweimler, B.

    2016-04-01

    The deformation monitoring of structures and buildings is an important task of modern engineering surveying, ensuring the stability and reliability of the supervised objects over a long period. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the Neubrandenburg University of Applied Sciences (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well-known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It will be discussed how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  16. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  17. Crux: Rapid Open Source Protein Tandem Mass Spectrometry Analysis

    PubMed Central

    2015-01-01

    Efficiently and accurately analyzing big protein tandem mass spectrometry data sets requires robust software that incorporates state-of-the-art computational, machine learning, and statistical methods. The Crux mass spectrometry analysis software toolkit (http://cruxtoolkit.sourceforge.net) is an open source project that aims to provide users with a cross-platform suite of analysis tools for interpreting protein mass spectrometry data. PMID:25182276

  18. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, with all these advancements, we find very little open source software available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none are available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes the popular open source OCR engine Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
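
    On current platforms the same Tesseract engine can be driven from Python via pytesseract; the short sketch below is a hedged, present-day illustration of the OCR step described (the original framework targeted mobile phones rather than this wrapper), and the image file name is a placeholder.

        from PIL import Image
        import pytesseract   # pip install pytesseract; requires the Tesseract engine installed separately

        # recognize the text on a captured business-card photo (hypothetical file name)
        text = pytesseract.image_to_string(Image.open("business_card.jpg"), lang="eng")
        print(text)

    A text-to-speech module such as the Flite engine mentioned in the abstract would then take the recognized string and synthesize audio for the user.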

  19. HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization

    NASA Astrophysics Data System (ADS)

    LAVIE, Baptiste

    2015-12-01

    Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam’s razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).
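    The role of the Bayesian evidence in model selection can be illustrated with a toy one-parameter example. The sketch below estimates the evidence by brute-force averaging over the prior for two hypothetical models that differ only in prior width; nested sampling, as used by HELIOS-R, replaces this computation in higher-dimensional parameter spaces. The data and priors are invented for illustration.

      # Toy Bayesian evidence comparison (Occam's razor); not HELIOS-R code.
      import numpy as np

      data = np.array([1.1, 0.9, 1.2, 1.0, 0.8])   # hypothetical measurements
      sigma = 0.2

      def log_likelihood(theta):
          return -0.5 * np.sum((data - theta) ** 2 / sigma ** 2)

      def evidence(prior_lo, prior_hi, n=20000):
          # Z = integral of likelihood * prior; for a uniform prior this is the mean likelihood.
          theta = np.linspace(prior_lo, prior_hi, n)
          like = np.exp([log_likelihood(t) for t in theta])
          return like.mean()

      z_narrow = evidence(0.0, 2.0)    # simpler model: tighter prior
      z_wide = evidence(0.0, 10.0)     # more flexible model: wider prior
      print("Bayes factor (narrow vs wide):", z_narrow / z_wide)   # > 1, favouring the simpler model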

  20. Post hoc support vector machine learning for impedimetric biosensors based on weak protein-ligand interactions.

    PubMed

    Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S

    2018-04-30

    Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as, or better, than the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All codes were based on scikit-learn, an open source software machine learning library in the Python language, and were processed in Jupyter notebook, an open-source web application for Python. The tool can easily be integrated with the mobile biosensor equipment for rapid detection, facilitating use by a broad range of impedimetric biosensor users. This post hoc analysis tool can serve as a launchpad for the convergence of nanobiosensors in planetary health monitoring applications based on mobile phone hardware.
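    A minimal sketch of this kind of post hoc classification is shown below, using the scikit-learn support vector machine on synthetic feature vectors; the feature values are randomly generated stand-ins for impedance-spectrum features, not data from the paper.

      # Illustrative SVM classification of synthetic impedance features (not the authors' notebook).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_blank = rng.normal(loc=1.0, scale=0.1, size=(50, 5))    # hypothetical blank spectra features
      X_target = rng.normal(loc=1.3, scale=0.1, size=(50, 5))   # hypothetical analyte-present features
      X = np.vstack([X_blank, X_target])
      y = np.array([0] * 50 + [1] * 50)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = SVC(kernel="rbf").fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))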

  1. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high-availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly-available, performant hosting for scientific applications.

  2. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  3. Sirepo for Synchrotron Radiation Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul; Rakitin, Maksim

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is the Synchrotron Radiation Workshop (SRW). SRW computes synchrotron radiation from relativistic electrons in arbitrary magnetic fields and propagates the radiation wavefronts through optical beamlines. SRW is open source and is primarily supported by Dr. Oleg Chubar of NSLS-II at Brookhaven National Laboratory.
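    The server-side idiom mentioned above can be sketched in a few lines. The example below is not Sirepo's code; it only shows the Flask pattern on which such a server builds, with a hypothetical route and response.

      # Minimal Flask route returning JSON; illustrative of the server stack, not Sirepo itself.
      from flask import Flask, jsonify

      app = Flask(__name__)

      @app.route("/simulation-status/<sim_id>")
      def simulation_status(sim_id):
          # A real server would dispatch the request to a physics code running in a container.
          return jsonify({"simulation": sim_id, "state": "completed"})

      if __name__ == "__main__":
          app.run(port=8000)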

  4. MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions.

    PubMed

    Klemm, Martin; Kirchner, Thomas; Gröhl, Janek; Cheray, Dominique; Nolden, Marco; Seitel, Alexander; Hoppe, Harald; Maier-Hein, Lena; Franz, Alfred M

    2017-03-01

    Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. MITK-OpenIGTLink is presented as a network interface within MITK that allows easy to use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization. We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively. With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence ( http://mitk.org ).

  5. Open-source tools for data mining.

    PubMed

    Zupan, Blaz; Demsar, Janez

    2008-03-01

    With a growing volume of biomedical databases and repositories, the need to develop a set of tools to address their analysis and support knowledge discovery is becoming acute. The data mining community has developed a substantial set of techniques for computational treatment of these data. In this article, we discuss the evolution of open-source toolboxes that data mining researchers and enthusiasts have developed over the span of a few decades and review several currently available open-source data mining suites. The approaches we review are diverse in data mining methods and user interfaces and also demonstrate that the field and its tools are ready to be fully exploited in biomedical research.

  6. Creating a Rackspace and NASA Nebula compatible cloud using the OpenStack project (Invited)

    NASA Astrophysics Data System (ADS)

    Clark, R.

    2010-12-01

    NASA and Rackspace have both contributed technology to the OpenStack project, which allows anyone to create a private Infrastructure as a Service (IaaS) cloud using open source software and commodity hardware. OpenStack is designed and developed completely in the open and with an open governance process. NASA donated Nova, which powers the compute portion of the NASA Nebula Cloud Computing Platform, and Rackspace donated Swift, which powers Rackspace Cloud Files. The project is now in continuous development by NASA, Rackspace, and hundreds of other participants. When you create a private cloud using OpenStack, you will have the ability to easily interact with your private cloud, a government cloud, and an ecosystem of public cloud providers, using the same API.
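    As an illustration of interacting with that common API, the snippet below lists the servers in a project using the openstacksdk client library; it assumes the package is installed and that a cloud named "mycloud" is defined in a local clouds.yaml file.

      # Listing servers through the OpenStack API with openstacksdk (assumes a configured cloud entry).
      import openstack

      conn = openstack.connect(cloud="mycloud")   # hypothetical cloud name from clouds.yaml
      for server in conn.compute.servers():
          print(server.name, server.status)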

  7. RTSPM: real-time Linux control software for scanning probe microscopy.

    PubMed

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
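    The control task such software performs can be illustrated, in a purely conceptual way, by a software feedback loop that adjusts the tip height until a measured signal reaches its setpoint. The sketch below is not the RTSPM code; the plant model and gain are invented for illustration, and a real system would run this loop against the data acquisition card under the real-time kernel.

      # Conceptual integral feedback loop for tip-height control (illustrative only).
      setpoint = 1.0
      gain = 0.5                      # hypothetical feedback gain

      def measure(height):
          # Toy stand-in for the physical signal read from the DAQ card.
          return 0.8 * height

      height = 0.0
      for step in range(50):
          error = setpoint - measure(height)
          height += gain * error      # move the tip to reduce the error

      print("final height:", round(height, 3), "signal:", round(measure(height), 3))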

  8. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massive parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
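    The per-particle quantities mentioned above can be computed serially for a small point set with SciPy's Qhull bindings, as sketched below; PARAVT parallelizes this kind of computation over MPI tasks with domain decomposition. The random points are purely illustrative.

      # Serial Voronoi neighbours for a small random point set (illustrative; not PARAVT).
      import numpy as np
      from scipy.spatial import Voronoi

      points = np.random.default_rng(1).random((100, 3))
      vor = Voronoi(points)

      # Neighbours of particle 0: all points sharing a Voronoi ridge (face) with it.
      neighbours = {int(j) for i, j in vor.ridge_points if i == 0}
      neighbours |= {int(i) for i, j in vor.ridge_points if j == 0}
      print("particle 0 has", len(neighbours), "Voronoi neighbours")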

  9. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.

  10. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  11. An open source high-performance solution to extract surface water drainage networks from diverse terrain conditions

    USGS Publications Warehouse

    Stanislawski, Larry V.; Survila, Kornelijus; Wendel, Jeffrey; Liu, Yan; Buttenfield, Barbara P.

    2018-01-01

    This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.

  12. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  13. Guide to NavyFOAM V1.0

    DTIC Science & Technology

    2011-04-01

    NavyFOAM has been developed using an open-source CFD software toolkit (OpenFOAM) that draws heavily upon object-oriented programming. The numerical methods and the physical models in the original version of OpenFOAM have been upgraded in an effort to improve accuracy and robustness. Keywords: computational fluid dynamics (CFD), OpenFOAM, Object Oriented Programming (OOP), NavyFOAM.

  14. ggCyto: Next Generation Open-Source Visualization Software for Cytometry.

    PubMed

    Van, Phu; Jiang, Wenxin; Gottardo, Raphael; Finak, Greg

    2018-06-01

    Open source software for computational cytometry has gained in popularity over the past few years. Efforts such as FlowCAP, the Lyoplate and Euroflow projects have highlighted the importance of efforts to standardize both experimental and computational aspects of cytometry data analysis. The R/BioConductor platform hosts the largest collection of open source cytometry software covering all aspects of data analysis and providing infrastructure to represent and analyze cytometry data with all relevant experimental, gating, and cell population annotations enabling fully reproducible data analysis. Data visualization frameworks to support this infrastructure have lagged behind. ggCyto is a new open-source BioConductor software package for cytometry data visualization built on ggplot2 that enables ggplot-like functionality with the core BioConductor flow cytometry data structures. Amongst its features are the ability to transform data and axes on-the-fly using cytometry-specific transformations, plot faceting by experimental meta-data variables, and partial matching of channel, marker and cell populations names to the contents of the BioConductor cytometry data structures. We demonstrate the salient features of the package using publicly available cytometry data with complete reproducible examples in a supplementary material vignette. https://bioconductor.org/packages/devel/bioc/html/ggcyto.html. gfinak@fredhutch.org. Supplementary data are available at Bioinformatics online and at http://rglab.org/ggcyto/.

  15. Chaste: An Open Source C++ Library for Computational Physiology and Biology

    PubMed Central

    Mirams, Gary R.; Arthurs, Christopher J.; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Corrias, Alberto; Davit, Yohan; Dunn, Sara-Jane; Fletcher, Alexander G.; Harvey, Daniel G.; Marsh, Megan E.; Osborne, James M.; Pathmanathan, Pras; Pitt-Francis, Joe; Southern, James; Zemzemi, Nejib; Gavaghan, David J.

    2013-01-01

    Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials. PMID:23516352

  16. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
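    The coordinate descent strategy named above can be illustrated on a toy least-squares problem: the objective is minimized by sweeping over the unknowns and updating one coordinate at a time, the same access pattern that motivates storing the system matrix column-wise. The matrix below is random, not a CT system matrix.

      # Toy iterative coordinate descent (ICD) for ||Ax - b||^2; illustrative, not FreeCT_ICD code.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 10))        # stand-in for the column-stored system matrix
      x_true = rng.normal(size=10)
      b = A @ x_true

      x = np.zeros(10)
      r = b - A @ x                        # current residual
      for sweep in range(20):
          for j in range(A.shape[1]):      # update one coordinate (voxel) at a time
              col = A[:, j]
              step = col @ r / (col @ col)
              x[j] += step
              r -= step * col
      print("max reconstruction error:", np.max(np.abs(x - x_true)))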

  17. Open Source Tools for Numerical Simulation of Urban Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Nottrott, A.; Tan, S. M.; He, Y.

    2016-12-01

    There is a global movement toward urbanization. Approximately 7% of the global population lives in just 28 megacities, occupying less than 0.1% of the total land area used by human activity worldwide. These cities contribute a significant fraction of the global budget of anthropogenic primary pollutants and greenhouse gasses. The 27 largest cities consume 9.9%, 9.3%, 6.7% and 3.0% of global gasoline, electricity, energy and water use, respectively. This impact motivates novel approaches to quantify and mitigate the growing contribution of megacity emissions to global climate change. Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source level dynamics, local measurements, and urban scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is an open source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low cost computer aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model methane (CH4) emissions from various components of the natural gas distribution system, to investigate the impact of urban meteorology on mobile CH4 measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of the plume due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments.

  18. Free and open-source software application for the evaluation of coronary computed tomography angiography images.

    PubMed

    Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de

    2012-10-01

    The standardization of images used in Medicine was performed in 1993 via the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, and this fact hinders their adjustment to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software application sold with Philips Brilliance computed tomography scanners in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
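    The agreement statistic used above can be reproduced for any pair of readings with scikit-learn, as in the minimal sketch below; the gradings are invented for illustration.

      # Cohen's kappa for two observers' hypothetical lesion gradings.
      from sklearn.metrics import cohen_kappa_score

      observer_1 = ["none", "lt50", "none", "gt70", "lt50", "none"]
      observer_2 = ["none", "lt50", "lt50", "gt70", "lt50", "none"]
      print("kappa:", cohen_kappa_score(observer_1, observer_2))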

  19. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited computer processing, storage power, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon Web Services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, and works in a distributed computer environment; it creates a real-time multiuser collaboration platform, the programming language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state-of-the-art cloud geospatial collaboration platform. The presented solution is a prototype and can be used as a foundation for developing any specialized cloud geospatial application. Further research will be focused on distributing the cloud application on additional VMs, testing the scalability and availability of services.

  20. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resource, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in chemical engineering and mining industry. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built based on open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation test, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in the simulations using up to O(107) particles.

  1. Cloud Computing for DoD

    DTIC Science & Technology

    2012-05-01

    NASA Nebula Platform: a cloud computing pilot program at NASA Ames; integrates open-source components into a seamless, self-…; mission support; education and public outreach (NASA Nebula, 2010). NSF Supported Cloud Research: support for cloud computing in …. References: Mell, P. & Grance, T. (2011). The NIST Definition of Cloud Computing. NIST Special Publication 800-145; NASA Nebula (2010).

  2. OpenID Connect as a security service in cloud-based medical imaging systems.

    PubMed

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-04-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer (REST)-based federated identity solution. It is one of the most widely adopted open standards and may become the de facto standard for securing cloud computing and mobile applications; it has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS) as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
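    The first step a relying party performs with OpenID Connect, discovering the provider's endpoints, can be sketched as below. The issuer URL is hypothetical; the well-known discovery path is defined by the OpenID Connect Discovery specification.

      # Fetching an OpenID Connect provider's discovery document (illustrative issuer URL).
      import requests

      issuer = "https://auth.example-hospital.org"
      config = requests.get(issuer + "/.well-known/openid-configuration").json()
      print("authorization endpoint:", config["authorization_endpoint"])
      print("token endpoint:", config["token_endpoint"])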

  3. Mobile Technologies and Augmented Reality in Open Education

    ERIC Educational Resources Information Center

    Kurubacak, Gulsun, Ed.; Altinpulluk, Hakan, Ed.

    2017-01-01

    Novel trends and innovations have enhanced contemporary educational environments. When applied properly, these computing advances can create enriched learning opportunities for students. "Mobile Technologies and Augmented Reality in Open Education" is a pivotal reference source for the latest academic research on the integration of…

  4. IGT-Open: An open-source, computerized version of the Iowa Gambling Task.

    PubMed

    Dancy, Christopher L; Ritter, Frank E

    2017-06-01

    The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture.
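    The task's payoff logic can be simulated in a few lines, as sketched below. The payoff values follow a commonly cited schedule and are used here only for illustration; they are not necessarily the exact parameters implemented in IGT-Open.

      # Toy simulation of Iowa Gambling Task deck payoffs (illustrative values).
      import random

      random.seed(0)
      decks = {
          "A": {"gain": 100, "loss": 250, "p_loss": 0.5},    # disadvantageous
          "B": {"gain": 100, "loss": 1250, "p_loss": 0.1},   # disadvantageous
          "C": {"gain": 50, "loss": 50, "p_loss": 0.5},      # advantageous
          "D": {"gain": 50, "loss": 250, "p_loss": 0.1},     # advantageous
      }

      def draw(name):
          d = decks[name]
          loss = d["loss"] if random.random() < d["p_loss"] else 0
          return d["gain"] - loss

      totals = {name: sum(draw(name) for _ in range(100)) for name in decks}
      print(totals)    # decks C and D yield higher long-run totals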

  5. Holonomic quantum computation in the presence of decoherence.

    PubMed

    Fuentes-Guridi, I; Girelli, F; Livine, E

    2005-01-21

    We present a scheme to study non-Abelian adiabatic holonomies for open Markovian systems. As an application of our framework, we analyze the robustness of holonomic quantum computation against decoherence. We pinpoint the sources of error that must be corrected to achieve a geometric implementation of quantum computation completely resilient to Markovian decoherence.

  6. It's Time to Consider Open Source Software

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2007-01-01

    In 1985 Richard Stallman, a computer programmer, released "The GNU Manifesto" in which he proclaimed a golden rule: One must share computer programs. Software vendors required him to agree to license agreements that forbade sharing programs with others, but he refused to "break solidarity" with other computer users whom he assumed also wanted to…

  7. Software Solution Saves Dollars

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2004-01-01

    This article discusses computer software that can give classrooms and computer labs the capabilities of costly PC's at a small fraction of the cost. A growing number of cost-conscious school districts are finding budget relief in low-cost computer software known as "open source" that can do everything from manage school Web sites to equip…

  8. PaaS for web applications with OpenShift Origin

    NASA Astrophysics Data System (ADS)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  9. Open Source Next Generation Visualization Software for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Rinker, George

    2016-01-01

    Mission control is evolving quickly, driven by the requirements of new missions, and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases, are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).

  10. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project, OCP (http://www.opencompute.org/), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  11. Numerical Analysis of the Cavity Flow subjected to Passive Controls Techniques

    NASA Astrophysics Data System (ADS)

    Melih Guleren, Kursad; Turk, Seyfettin; Mirza Demircan, Osman; Demir, Oguzhan

    2018-03-01

    Open-source flow solvers are becoming more and more popular for the analysis of challenging flow problems in aeronautical and mechanical engineering applications. They are offered under the GNU General Public License and can be run, examined, shared and modified according to the user’s requirements. SU2 and OpenFOAM are the two most popular open-source solvers in the Computational Fluid Dynamics (CFD) community. In the present study, some passive control methods for high-speed cavity flows are numerically simulated using these open-source flow solvers along with one commercial flow solver, ANSYS/Fluent. The results are compared with the available experimental data. SU2 is seen to predict the mean streamline velocity satisfactorily, but not the turbulent kinetic energy or the overall averaged sound pressure level (OASPL), whereas OpenFOAM predicts all these parameters at nearly the same levels as ANSYS/Fluent.

  12. Using R to implement spatial analysis in open source environment

    NASA Astrophysics Data System (ADS)

    Shao, Yixi; Chen, Dong; Zhao, Bo

    2007-06-01

    R is an open source (GPL) language and environment for spatial analysis, statistical computing and graphics, which provides a wide variety of statistical and graphical techniques and is highly extensible. It plays an important role in spatial analysis within the open source environment. Implementing spatial analysis in the open source environment, which we call open source geocomputation, therefore means using the R data analysis language integrated with GRASS GIS and MySQL or PostgreSQL. This paper explains the architecture of the open source GIS environment and emphasizes the role R plays in spatial analysis. Furthermore, an illustration of the functions of R is given through the project of constructing CZPGIS (Cheng Zhou Population GIS), supported by the Changzhou Government, China. In this project we use R to implement geostatistics in the open source GIS environment, evaluating the spatial correlation of land price and estimating it by Kriging interpolation. We also use R integrated with MapServer and PHP to show how R and other open source software cooperate in a WebGIS environment, which demonstrates the advantages of using R for spatial analysis in an open source GIS environment. Finally, we point out that the packages for spatial analysis in R are still scattered and that limited memory remains a bottleneck when a large number of clients connect at the same time. Further work is therefore to organize the extensive packages or design normative packages, and to make R cooperate better with commercial software such as ArcIMS. We also look forward to developing packages for land price evaluation.

  13. Automatic tracking of red blood cells in micro channels using OpenCV

    NASA Astrophysics Data System (ADS)

    Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui

    2013-10-01

    The present study aims to develop an automatic method able to track red blood cell (RBC) trajectories flowing through a microchannel using the Open Source Computer Vision library (OpenCV). The developed method is based on optical flow calculation assisted by maximization of the template-matching product. The experimental results show a good functional performance of this method.
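    The template-matching step named above can be sketched with OpenCV as follows; the image file names are hypothetical, and the full method additionally uses the optical flow between consecutive frames to constrain the search.

      # Locating the best template match in one frame with OpenCV (illustrative file names).
      import cv2

      frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
      template = cv2.imread("rbc_template.png", cv2.IMREAD_GRAYSCALE)

      scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(scores)
      print("best match at", max_loc, "with score", round(max_val, 3))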

  14. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  16. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
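    The core computation performed at each grid point and time step of such a simulation is a flux balance analysis: a linear program that maximizes a growth flux subject to steady-state stoichiometry and flux bounds. The sketch below solves a hypothetical three-reaction toy network with SciPy and is not COMETS code.

      # Toy flux balance analysis with scipy.optimize.linprog (hypothetical network).
      import numpy as np
      from scipy.optimize import linprog

      # Metabolites (rows): A, B. Reactions (columns): uptake of A, A -> B, growth (consumes B).
      S = np.array([[1.0, -1.0, 0.0],
                    [0.0, 1.0, -1.0]])
      bounds = [(0, 10), (0, 1000), (0, 1000)]     # uptake of A limited to 10 units
      c = np.array([0.0, 0.0, -1.0])               # linprog minimizes, so negate the growth flux

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("growth flux:", res.x[2])              # limited by the uptake bound (10)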

  17. Open-source image registration for MRI-TRUS fusion-guided prostate interventions.

    PubMed

    Fedorov, Andriy; Khallaghi, Siavash; Sánchez, C Antonio; Lasso, Andras; Fels, Sidney; Tuncali, Kemal; Sugar, Emily Neubauer; Kapur, Tina; Zhang, Chenxi; Wells, William; Nguyen, Paul L; Abolmaesumi, Purang; Tempany, Clare

    2015-06-01

    We propose two software tools for non-rigid registration of MRI and transrectal ultrasound (TRUS) images of the prostate. Our ultimate goal is to develop an open-source solution to support MRI-TRUS fusion image guidance of prostate interventions, such as targeted biopsy for prostate cancer detection and focal therapy. It is widely hypothesized that image registration is an essential component in such systems. The two non-rigid registration methods are: (1) a deformable registration of the prostate segmentation distance maps with B-spline regularization and (2) a finite element-based deformable registration of the segmentation surfaces in the presence of partial data. We evaluate the methods retrospectively using clinical patient image data collected during standard clinical procedures. Computation time and Target Registration Error (TRE) calculated at the expert-identified anatomical landmarks were used as quantitative measures for the evaluation. The presented image registration tools were capable of completing deformable registration computation within 5 min. Average TRE was approximately 3 mm for both methods, which is comparable with the slice thickness in our MRI data. Both tools are available under nonrestrictive open-source license. We release open-source tools that may be used for registration during MRI-TRUS-guided prostate interventions. Our tools implement novel registration approaches and produce acceptable registration results. We believe these tools will lower the barriers in development and deployment of interventional research solutions and facilitate comparison with similar tools.
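    The evaluation metric reported above is straightforward to compute once corresponding landmarks are available: the sketch below takes hypothetical landmark coordinates (in millimetres) after registration and reports the per-landmark and mean Target Registration Error.

      # Target Registration Error from corresponding landmark positions (hypothetical coordinates).
      import numpy as np

      mri_landmarks = np.array([[10.0, 22.0, 5.0], [14.0, 30.0, 8.0], [20.0, 25.0, 3.0]])
      trus_registered = np.array([[11.0, 21.0, 6.0], [15.0, 31.5, 7.0], [19.0, 26.0, 4.0]])

      tre = np.linalg.norm(mri_landmarks - trus_registered, axis=1)
      print("per-landmark TRE (mm):", tre)
      print("mean TRE (mm):", tre.mean())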

  18. Digital Textbooks. Research Brief

    ERIC Educational Resources Information Center

    Johnston, Howard

    2011-01-01

    Despite their growing popularity, digital alternatives to conventional textbooks are stirring up controversy. With the introduction of tablet computers, and the growing trend toward "cloud computing" and "open source" software, the trend is accelerating because costs are coming down and free or inexpensive materials are becoming more available.…

  19. Situated Learning in Computer Science Education

    ERIC Educational Resources Information Center

    Ben-Ari, Mordechai

    2004-01-01

    Sociocultural theories of learning such as Wenger and Lave's situated learning have been suggested as alternatives to cognitive theories of learning like constructivism. This article examines situated learning within the context of computer science (CS) education. Situated learning accurately describes some CS communities like open-source software…

  20. From, by, and for the OSSD: Software Engineering Education Using an Open Source Software Approach

    ERIC Educational Resources Information Center

    Huang, Kun; Dong, Yifei; Ge, Xun

    2006-01-01

    Computing is a complex, multidisciplinary field that requires a range of professional proficiencies. Computing students are expected to develop in-depth knowledge and skills, integrate and apply their knowledge flexibly to solve complex problems, and work successfully in teams. However, many students who graduate with degrees in computing fail to…

  1. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from the scratch with emphasis on readability and…

  2. 40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants for Boat Manufacturing Standards for Open... section to compute the weighted-average MACT model point value for each open molding resin and gel coat...

  3. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful interfaces are an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards and may become the de facto standard for securing cloud computing and mobile applications; it has been described as the "Kerberos of the cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, DI-r and PACS deployed to private or community clouds should achieve a security level equivalent to that of the traditional computing model.
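
    To make the flow concrete, the hedged sketch below exchanges an authorization code for tokens at an OpenID Connect token endpoint using Python's requests library; the endpoint URL, client identifiers, and redirect URI are hypothetical placeholders, not values from the study.

      import requests

      # Real values come from the provider's discovery document
      # (/.well-known/openid-configuration); everything here is a placeholder.
      TOKEN_ENDPOINT = "https://idp.example.org/connect/token"

      resp = requests.post(
          TOKEN_ENDPOINT,
          data={
              "grant_type": "authorization_code",
              "code": "AUTH_CODE_FROM_REDIRECT",
              "redirect_uri": "https://pacs-client.example.org/callback",
              "client_id": "pacs-mobile-client",
              "client_secret": "CLIENT_SECRET",
          },
          timeout=10,
      )
      tokens = resp.json()
      id_token = tokens["id_token"]          # signed JWT asserting the user's identity
      access_token = tokens["access_token"]  # presented to protected DI-r/PACS APIs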

  4. Eddylicious: A Python package for turbulent inflow generation

    NASA Astrophysics Data System (ADS)

    Mukha, Timofey; Liefvendahl, Mattias

    2018-01-01

    A Python package for generating inflow for scale-resolving computer simulations of turbulent flow is presented. The purpose of the package is to unite existing inflow generation methods in a single code-base and make them accessible to users of various Computational Fluid Dynamics (CFD) solvers. The currently existing functionality consists of an accurate inflow generation method suitable for flows with a turbulent boundary layer inflow and input/output routines for coupling with the open-source CFD solver OpenFOAM.

  5. Building Computer-Based Experiments in Psychology without Programming Skills.

    PubMed

    Ruisoto, Pablo; Bellido, Alberto; Ruiz, Javier; Juanes, Juan A

    2016-06-01

    Research in psychology usually requires building and running experiments. Although this task has traditionally required scripting, recent computer tools based on graphical interfaces offer new opportunities in this field for researchers without programming skills. The purpose of this study is to illustrate and provide a comparative overview of two of the main free open source "point and click" software packages for building and running experiments in Psychology: PsychoPy and OpenSesame. Recommendations for their potential use are further discussed.
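
    Both packages also expose Python scripting alongside their graphical builders. The minimal PsychoPy sketch below runs a single trial (draw a text stimulus, wait for a key press, record its latency); the window size, colors, and stimulus text are arbitrary choices for illustration.

      from psychopy import visual, core, event

      win = visual.Window(size=(800, 600), color="black", units="pix")
      stim = visual.TextStim(win, text="Press any key", color="white")

      stim.draw()
      win.flip()                                 # stimulus appears on this frame
      clock = core.Clock()
      keys = event.waitKeys(timeStamped=clock)   # list of [key, reaction time] pairs
      print(keys)

      win.close()
      core.quit()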

  6. Mapping urban green open space in Bontang city using QGIS and cloud computing

    NASA Astrophysics Data System (ADS)

    Agus, F.; Ramadiani; Silalahi, W.; Armanda, A.; Kusnandar

    2018-04-01

    Digital mapping techniques are freely and openly available, making map-based application development easier, faster and cheaper. The rapid development of cloud computing Geographic Information Systems means such systems can help meet the community's need for online geospatial information. Urban Green Open Space (GOS) provides great benefits as an oxygen supplier and carbon sink, and contributes to the comfort and beauty of city life. This study aims to propose a GIS Cloud Computing (CC) application platform for mapping the GOS of Bontang City. The GIS-CC platform uses free and open-source base maps. The research used a survey method to collect GOS data obtained from the Bontang City Government, while application development was carried out with Quantum GIS-CC. The results section describes the existing GOS of Bontang City and the design of the GOS mapping application.

  7. From OSS CAD to BIM for Cultural Heritage Digital Representation

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Stylianidis, E.

    2017-02-01

    The paper illustrates the use of open source Computer-aided design (CAD) environments in order to develop Building Information Modelling (BIM) tools able to manage 3D models in the field of cultural heritage. Nowadays, the development of Free and Open Source Software (FOSS) has been growing rapidly and its use is becoming consolidated. Although BIM technology is widely known and used, there is a lack of integrated open source platforms able to support all stages of Historic Building Information Modelling (HBIM) processes. The present research aims to use a FOSS CAD environment in order to develop BIM plug-ins which will be able to import and edit digital representations of cultural heritage models derived by photogrammetric methods.

  8. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    PubMed

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
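
    The reported model statistics (kappa, sensitivity, specificity, PPV) can be reproduced from any confusion matrix; the hedged sketch below uses scikit-learn on a tiny, made-up set of stability labels purely to show how the numbers are derived, not the authors' dataset or descriptors.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score, confusion_matrix

      # Hypothetical binary labels: 1 = metabolically stable, 0 = unstable.
      y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
      y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
      kappa = cohen_kappa_score(y_true, y_pred)
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      ppv = tp / (tp + fp)
      print(f"kappa={kappa:.2f} sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f}")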

  9. OpenID Connect as a security service in cloud-based medical imaging systems

    PubMed Central

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-01-01

    Abstract. The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing in healthcare. OpenID Connect, combining OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards and may become the de facto standard for securing cloud computing and mobile applications; it has been described as the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS) as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while deploying DI-r and PACS to private or community clouds with security levels equivalent to those of the traditional computing model. PMID:27340682

  10. Laptops and Inspired Writing

    ERIC Educational Resources Information Center

    Warschauer, Mark; Arada, Kathleen; Zheng, Binbin

    2010-01-01

    Can daily access to laptop computers help students become better writers? Are such programs affordable? Evidence from the Inspired Writing program in Littleton Public Schools, Colorado, USA, provides a resounding yes to both questions. The program employs student netbooks, open-source software, cloud computing, and social media to help students in…

  11. Free LittleDog!: Towards Completely Untethered Operation of the LittleDog Quadruped

    DTIC Science & Technology

    2007-08-01

    helpful Intel Open Source Computer Vision (OpenCV) library [4] wherever possible rather than reimplementing many of the standard algorithms, however... correspondences between image points and world points, and feeding these to a camera calibration function, such as that provided by OpenCV, allows one to solve... OpenCV calibration function to that used for intrinsic calibration solves for T_board→camera_i. The position of the camera...

  12. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
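
    Prototyping in Fiji typically starts in the built-in Script Editor; the hedged Jython (Python-syntax) sketch below opens an image, smooths it, and detects maxima. The file path and parameter values are illustrative assumptions, not part of the paper.

      # Run from Fiji's Script Editor with the language set to Python (Jython).
      from ij import IJ

      imp = IJ.openImage("/path/to/cells.tif")                 # hypothetical input image
      IJ.run(imp, "Gaussian Blur...", "sigma=2")               # denoise before detection
      IJ.run(imp, "Find Maxima...", "prominence=50 output=[Point Selection]")
      imp.show()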

  13. Mushu, a free- and open source BCI signal acquisition, written in Python.

    PubMed

    Venthur, Bastian; Blankertz, Benjamin

    2012-01-01

    The following paper describes Mushu, a signal acquisition software for retrieval and online streaming of Electroencephalography (EEG) data. It is written for, but not limited to, the needs of Brain-Computer Interfacing (BCI). Its main goal is to provide a unified interface to EEG data regardless of the amplifiers used. It runs under all major operating systems, such as Windows, Mac OS and Linux, is written in Python and is free and open-source software licensed under the terms of the GNU General Public License.

  14. DigitalVHI--a freeware open-source software application to capture the Voice Handicap Index and other questionnaire data in various languages.

    PubMed

    Herbst, Christian T; Oh, Jinook; Vydrová, Jitka; Švec, Jan G

    2015-07-01

    In this short report we introduce DigitalVHI, a free open-source software application for obtaining Voice Handicap Index (VHI) and other questionnaire data, which can be put on a computer in clinics and used in clinical practice. The software can simplify performing clinical studies since it makes the VHI scores directly available for analysis in a digital form. It can be downloaded from http://www.christian-herbst.org/DigitalVHI/.

  15. The MyHealthService approach for chronic disease management based on free open source software and low cost components.

    PubMed

    Vognild, Lars K; Burkow, Tatjana M; Luque, Luis Fernandez

    2009-01-01

    In this paper we present an approach to building personal health services, supporting follow-up, physical exercise, health education, and psychosocial support for the chronically ill, based on free open-source software and low-cost computers, mobile devices, and consumer health and fitness devices. We argue that this will lower the cost of the systems, which is important given the increasing number of people with chronic diseases and limited healthcare budgets.

  16. Disambiguate: An open-source application for disambiguating two species in next generation sequencing data from grafted samples.

    PubMed

    Ahdesmäki, Miika J; Gray, Simon R; Johnson, Justin H; Lai, Zhongwu

    2016-01-01

    Grafting of cell lines and primary tumours is a crucial step in the drug development process between cell line studies and clinical trials. Disambiguate is a program for computationally separating the sequencing reads of two species derived from grafted samples. Disambiguate operates on DNA or RNA-seq alignments to the two species and separates the components at very high sensitivity and specificity as illustrated in artificially mixed human-mouse samples. This allows for maximum recovery of data from target tumours for more accurate variant calling and gene expression quantification. Given that no general use open source algorithm accessible to the bioinformatics community exists for the purposes of separating the two species data, the proposed Disambiguate tool presents a novel approach and improvement to performing sequence analysis of grafted samples. Both Python and C++ implementations are available and they are integrated into several open and closed source pipelines. Disambiguate is open source and is freely available at https://github.com/AstraZeneca-NGS/disambiguate.

  17. Roadmap for cardiovascular circulation model

    PubMed Central

    Bradley, Christopher P.; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R.; Omholt, Stig W.; Chase, J. Geoffrey; Müller, Lucas O.; Watanabe, Sansuke M.; Blanco, Pablo J.; de Bono, Bernard; Hunter, Peter J.

    2016-01-01

    Abstract Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well‐established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo‐skeletal system. The computational infrastructure for the cardiovascular model should provide for near real‐time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. PMID:27506597

  18. Roadmap for cardiovascular circulation model.

    PubMed

    Safaei, Soroush; Bradley, Christopher P; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R; Omholt, Stig W; Chase, J Geoffrey; Müller, Lucas O; Watanabe, Sansuke M; Blanco, Pablo J; de Bono, Bernard; Hunter, Peter J

    2016-12-01

    Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well-established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo-skeletal system. The computational infrastructure for the cardiovascular model should provide for near real-time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
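
    For reference, the 1D blood flow equations reviewed in this roadmap are commonly written in the following standard area-flow form; the momentum-flux correction factor, friction coefficient, and the algebraic tube law shown here are one common closure, and the coefficients used in OpenCMISS may differ.

      \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
      \frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\alpha \frac{Q^2}{A}\right) + \frac{A}{\rho}\frac{\partial p}{\partial x} = -K_R \frac{Q}{A}
      p = p_{\mathrm{ext}} + \frac{\beta}{A_0}\left(\sqrt{A} - \sqrt{A_0}\right)

    Here A(x,t) is the vessel cross-sectional area, Q(x,t) the volumetric flow rate, p the pressure, \rho the blood density, \alpha the momentum-flux correction factor, K_R the friction coefficient, and \beta a stiffness parameter of the vessel wall.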

  19. THE CHALLENGE OF QUALITY ASSURANCE FOR EMISSION FLUX MEASUREMENTS OF LARGE AREA SOURCES BY OPTICAL REMOTE SENSING

    EPA Science Inventory

    The paper examines the quality assurance challenges associated with open path Fourier transform infrared (OPFTIR) measurements of large area pollution sources with plume reconstruction by computed tomography (CT) and how each challenge may be met. Traditionally, pollutant concent...

  20. Noninvasive Fetal ECG: the PhysioNet/Computing in Cardiology Challenge 2013.

    PubMed

    Silva, Ikaro; Behar, Joachim; Sameni, Reza; Zhu, Tingting; Oster, Julien; Clifford, Gari D; Moody, George B

    2013-03-01

    The PhysioNet/CinC 2013 Challenge aimed to stimulate rapid development and improvement of software for estimating fetal heart rate (FHR), fetal interbeat intervals (FRR), and fetal QT intervals (FQT) from multichannel recordings made using electrodes placed on the mother's abdomen. For the challenge, five data collections from a variety of sources were used to compile a large standardized database, which was divided into training, open test, and hidden test subsets. Gold-standard fetal QRS and QT interval annotations were developed using a novel crowd-sourcing framework. The challenge organizers used the hidden test subset, which was not available for study by participants, to evaluate open-source software entries submitted by 53 international teams of participants in three challenge events estimating FHR, FRR, and FQT. Two additional events required only user-submitted QRS annotations to evaluate FHR and FRR estimation accuracy using the open test subset available to participants. The challenge yielded a total of 91 open-source software entries. The best of these achieved average estimation errors of 187 bpm² for FHR, 20.9 ms for FRR, and 152.7 ms for FQT. The open data sets, scoring software, and open-source entries are available at PhysioNet for researchers interested in working on these problems.

  1. Factors Influencing F/OSS Cloud Computing Software Product Success: A Quantitative Study

    ERIC Educational Resources Information Center

    Letort, D. Brian

    2012-01-01

    Cloud Computing introduces a new business operational model that allows an organization to shift information technology consumption from traditional capital expenditure to operational expenditure. This shift introduces challenges from both the adoption and creation vantage. This study evaluates factors that influence Free/Open Source Software…

  2. A Flexible, Extensible Online Testing System for Mathematics

    ERIC Educational Resources Information Center

    Passmore, Tim; Brookshaw, Leigh; Butler, Harry

    2011-01-01

    An online testing system developed for entry-skills testing of first-year university students in algebra and calculus is described. The system combines the open-source computer algebra system "Maxima" with computer scripts to parse student answers, which are entered using standard mathematical notation and conventions. The answers can…

  3. An inexpensive Arduino-based LED stimulator system for vision research.

    PubMed

    Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M

    2012-11-15

    Light-emitting diodes (LEDs) are being used increasingly as light sources in life sciences applications such as vision research, fluorescence microscopy and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goal of our system was to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that end-users without advanced electronics skills can use the system. The core of the system is a USB-connected Arduino microcontroller platform designed with a specific emphasis on ease of use for creating interactive physical computing environments. The pulse-width modulation (PWM) output of the Arduino was used to drive the LEDs, allowing linear light intensity control. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public License (GPL, version 3). Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Comparison/Validation Study of Lattice Boltzmann and Navier Stokes for Various Benchmark Applications: Report 1 in Discrete Nano-Scale Mechanics and Simulations Series

    DTIC Science & Technology

    2014-09-15

    solver, OpenFOAM version 2.1. In particular, the incompressible laminar flow equations (Eq. 6-8) were solved in conjunction with the pressure implicit... central differencing and upwinding schemes, respectively. Since the OpenFOAM code is inherently transient, steady-state conditions were obtained... collaborative effort between Kitware and Los Alamos National Laboratory. OpenFOAM is a free, open-source computational fluid dynamics software developed

  5. Tools for open geospatial science

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Mitasova, H.

    2017-12-01

    Open science uses open source to deal with reproducibility challenges in data and computational sciences. However, just using open source software or making the code public does not make the research reproducible. Moreover, the scientists face the challenge of learning new unfamiliar tools and workflows. In this contribution, we will look at a graduate-level course syllabus covering several software tools which make validation and reuse by a wider professional community possible. For the novices in the open science arena, we will look at how scripting languages such as Python and Bash help us reproduce research (starting with our own work). Jupyter Notebook will be introduced as a code editor, data exploration tool, and a lab notebook. We will see how Git helps us not to get lost in revisions and how Docker is used to wrap all the parts together using a single text file so that figures for a scientific paper or a technical report can be generated with a single command. We will look at examples of software and publications in the geospatial domain which use these tools and principles. Scientific contributions to GRASS GIS, a powerful open source desktop GIS and geoprocessing backend, will serve as an example of why and how to publish new algorithms and tools as part of a bigger open source project.

  6. An Inexpensive, Open-Source USB Arduino Data Acquisition Device for Chemical Instrumentation.

    PubMed

    Grinias, James P; Whitfield, Jason T; Guetschow, Erik D; Kennedy, Robert T

    2016-07-12

    Many research and teaching labs rely on USB data acquisition devices to collect voltage signals from instrumentation. However, these devices can be cost-prohibitive (especially when large numbers are needed for teaching labs) and require software to be developed for operation. In this article, we describe the development and use of an open-source USB data acquisition device (with 16-bit acquisition resolution) built using simple electronic components and an Arduino Uno that costs under $50. Additionally, open-source software written in Python is included so that data can be acquired using nearly any PC or Mac computer with a simple USB connection. Use of the device was demonstrated for a sophomore-level analytical experiment using GC and a CE-UV separation on an instrument used for research purposes.
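
    As a hedged illustration of the host-side acquisition such a device enables, the sketch below reads ADC samples over a USB serial connection with pyserial; the port name, baud rate, line format (one 16-bit count per line), and 5 V reference are assumptions rather than the published design.

      import serial

      # Everything below (port, baud rate, framing, reference voltage) is assumed.
      with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
          samples = []
          for _ in range(1000):
              line = port.readline().strip()
              if line:
                  counts = int(line)                    # one 16-bit ADC reading per line
                  samples.append(counts * 5.0 / 65535)  # convert counts to volts
          print(f"acquired {len(samples)} samples")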

  7. Applying Open Source Game Engine for Building Visual Simulation Training System of Fire Fighting

    NASA Astrophysics Data System (ADS)

    Yuan, Diping; Jin, Xuesheng; Zhang, Jin; Han, Dong

    There is a growing need for fire departments to adopt a safe and fair method of training to ensure that the firefighting commander is able to manage a fire incident. Visual simulation training systems, with their ability to replicate and interact with virtual fire scenarios through the use of computer graphics or VR, have become an effective and efficient method for fire-ground education. This paper describes the system architecture and functions of a visual simulated training system for fighting fires at oil storage facilities, which adopts Delta3D, an open-source game and simulation engine, to provide realistic 3D views. It shows that using open-source technology provides not only commercial-level 3D effects but also a great reduction in cost.

  8. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  9. Gpufit: An open-source toolkit for GPU-accelerated curve fitting.

    PubMed

    Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark

    2017-11-16

    We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
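
    As a CPU-side reference for the kind of problem Gpufit accelerates, the hedged sketch below fits a 1D Gaussian by Levenberg-Marquardt using SciPy (curve_fit's default method for unbounded problems); the synthetic data and model are illustrative and do not use Gpufit's own Python binding.

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss(x, amp, x0, sigma, offset):
          # Single 1D Gaussian peak on a constant background.
          return amp * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

      x = np.linspace(-5, 5, 200)
      rng = np.random.default_rng(0)
      y = gauss(x, 2.0, 0.3, 1.2, 0.1) + 0.05 * rng.standard_normal(x.size)

      popt, pcov = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0, 0.0])
      print("amp, x0, sigma, offset =", popt)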

  10. Drekar v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seefeldt, Ben; Sondak, David; Hensinger, David M.

    Drekar is an application code that solves partial differential equations for fluids that can be optionally coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed integration finite element bases (nodal, edge, face, volume) and an initial arbitrary Lagrangian Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research for simulation of fluids and plasmas on high performance computing environments.

  11. Clinician accessible tools for GUI computational models of transcranial electrical stimulation: BONSAI and SPHERES.

    PubMed

    Truong, Dennis Q; Hüber, Mathias; Xie, Xihe; Datta, Abhishek; Rahman, Asif; Parra, Lucas C; Dmochowski, Jacek P; Bikson, Marom

    2014-01-01

    Computational models of brain current flow during transcranial electrical stimulation (tES), including transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), are increasingly used to understand and optimize clinical trials. We propose that broad dissemination requires simple graphical user interface (GUI) software that allows users to explore and design montages in real time, based on their own clinical/experimental experience and objectives. We introduce two complementary open-source platforms for this purpose: BONSAI and SPHERES. BONSAI is a web (cloud) based application (available at neuralengr.com/bonsai) that can be accessed through any flash-supported browser interface. SPHERES (available at neuralengr.com/spheres) is a stand-alone GUI application that allows consideration of arbitrary montages on a concentric sphere model by leveraging an analytical solution. These open-source tES modeling platforms are designed to be upgraded and enhanced. Trade-offs between open-access approaches that balance ease of access, speed, and flexibility are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Design and Implementation of a Modern Automatic Deformation Monitoring System

    NASA Astrophysics Data System (ADS)

    Engel, Philipp; Schweimler, Björn

    2016-03-01

    The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of the supervised objects over a long period. Several commercial hardware and software solutions for realizing such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open-source licence free of charge. The task of managing an open-source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It will be discussed how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  13. PyPedia: using the wiki paradigm as crowd sourcing environment for bioinformatics protocols.

    PubMed

    Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A

    2015-01-01

    Today researchers can choose from many bioinformatics protocols for all types of life sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all virtues to maximize reuse and promote reproducible science. Wikipedia has proven a great tool to disseminate information and enhance collaboration between users with varying expertise and background to author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both implementation and documentation of a bioinformatics computational protocol in the python language. Hyperlinks within the wiki can be used to compose complex workflows and induce reuse. A RESTful API enables code execution outside the wiki. Initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy to learn wiki syntax effectively lowers the barriers to bring expert programmers and less computer savvy researchers on the same page. PyPedia demonstrates how wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complement existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.

  14. Teaching Introductory GIS Programming to Geographers Using an Open Source Python Approach

    ERIC Educational Resources Information Center

    Etherington, Thomas R.

    2016-01-01

    Computer programming is not commonly taught to geographers as a part of geographic information system (GIS) courses, but the advent of NeoGeography, big data and open GIS means that programming skills are becoming more important. To encourage the teaching of programming to geographers, this paper outlines a course based around a series of…

  15. Synthetic Biology: Knowledge Accessed by Everyone (Open Sources)

    ERIC Educational Resources Information Center

    Sánchez Reyes, Patricia Margarita

    2016-01-01

    Using the principles of biology, along with engineering and with the help of computers, scientists manage to copy DNA sequences from nature and use them to create new organisms. DNA is created through engineering and computer science, making it possible to create life inside a laboratory. We cannot dismiss the role that synthetic biology could play in…

  16. Development of Web-Based Examination System Using Open Source Programming Model

    ERIC Educational Resources Information Center

    Abass, Olalere A.; Olajide, Samuel A.; Samuel, Babafemi O.

    2017-01-01

    The traditional method of assessment (examination) is often characterized by leakage of examination questions and by human errors during the marking of scripts and recording of scores. Technological advancement in the field of computer science has driven the use of computers in nearly all areas of human life and endeavor, including the education sector…

  17. The discounting model selector: Statistical software for delay discounting applications.

    PubMed

    Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A

    2017-05-01

    Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods on user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided. © 2017 Society for the Experimental Analysis of Behavior.
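
    To illustrate how ED50 relates to a fitted model, the hedged sketch below fits Mazur's hyperbolic discounting function to hypothetical indifference points with SciPy and reads off ED50 = 1/k; this is only one candidate from the kind of model set the Bayesian selector compares, and the data are made up.

      import numpy as np
      from scipy.optimize import curve_fit

      def hyperbolic(delay, k):
          # Mazur's hyperbolic model: subjective value as a fraction of the full amount.
          return 1.0 / (1.0 + k * delay)

      delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)    # days (hypothetical)
      indiff = np.array([0.95, 0.85, 0.60, 0.45, 0.30, 0.20])     # indifference points

      (k,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])
      ed50 = 1.0 / k    # delay at which value falls to half, for the hyperbolic model
      print(f"k = {k:.4f}, ED50 = {ed50:.1f} days")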

  18. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    NASA Astrophysics Data System (ADS)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has led to the need to process and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process with human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done along the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the use of limited resources. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  19. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is the culmination of several modernization efforts of the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.

  20. Simulating the Gradually Deteriorating Performance of an RTG

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry

    2008-01-01

    Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
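
    As a hedged, much-simplified illustration of the final step described above (relating open-circuit voltage and internal resistance to a specified load voltage), the sketch below computes instantaneous output power; it is not Degra's model, and all numerical values are made up.

      # Simplified estimate of RTG electrical output from thermocouple parameters.
      def rtg_power(v_oc, r_internal, v_load):
          """Power delivered at a fixed load voltage v_load (volts)."""
          current = (v_oc - v_load) / r_internal   # amps through the series circuit
          return v_load * current                  # watts delivered to the load

      print(rtg_power(v_oc=42.0, r_internal=2.0, v_load=30.0))   # ~180 W for these made-up values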

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into an OpenCL code, which is further compiled into a FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.

  2. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    PubMed

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes or/and time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.

  3. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome

    PubMed Central

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes or/and time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232
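
    As a hedged sketch of the final step (graph-theoretical measures computed from a connectivity matrix), the example below thresholds a random symmetric matrix into an undirected graph and computes a few measures with NetworkX; the matrix, threshold, and choice of measures are illustrative assumptions, not EEGNET's implementation.

      import numpy as np
      import networkx as nx

      # Hypothetical functional connectivity matrix (e.g., coherence between sources),
      # symmetrized and thresholded into a binary undirected graph.
      rng = np.random.default_rng(1)
      conn = rng.random((32, 32))
      conn = (conn + conn.T) / 2
      np.fill_diagonal(conn, 0)

      adj = (conn > 0.7).astype(int)
      G = nx.from_numpy_array(adj)

      print("mean degree:", np.mean([d for _, d in G.degree()]))
      print("global clustering:", nx.transitivity(G))
      print("density:", nx.density(G))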

  4. ELATE: an open-source online application for analysis and visualization of elastic tensors

    NASA Astrophysics Data System (ADS)

    Gaillac, Romain; Pullumbi, Pluton; Coudert, François-Xavier

    2016-07-01

    We report on the implementation of a tool for the analysis of second-order elastic stiffness tensors, provided with both an open-source Python module and a standalone online application allowing the visualization of anisotropic mechanical properties. After describing the software features, how we compute the conventional elastic constants and how we represent them graphically, we explain our technical choices for the implementation. In particular, we focus on why a Python module is used to generate the HTML web page with embedded Javascript for dynamical plots.
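
    For a flavor of the analysis such a tool performs, the hedged sketch below inverts a 6x6 stiffness matrix (Voigt notation) into a compliance matrix and reads off the axial Young's moduli; the example tensor is a hypothetical cubic material, and this covers only a small subset of what ELATE computes.

      import numpy as np

      # Stiffness matrix in Voigt notation (GPa) for a made-up cubic material.
      C = np.array([
          [165,  64,  64,   0,   0,   0],
          [ 64, 165,  64,   0,   0,   0],
          [ 64,  64, 165,   0,   0,   0],
          [  0,   0,   0,  79,   0,   0],
          [  0,   0,   0,   0,  79,   0],
          [  0,   0,   0,   0,   0,  79],
      ], dtype=float)

      S = np.linalg.inv(C)                       # compliance matrix
      E_x, E_y, E_z = 1 / S[0, 0], 1 / S[1, 1], 1 / S[2, 2]
      print(E_x, E_y, E_z)                       # Young's moduli along the crystal axes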

  5. Review on open source operating systems for internet of things

    NASA Astrophysics Data System (ADS)

    Wang, Zhengmin; Li, Wei; Dong, Huiliang

    2017-08-01

    Internet of Things (IoT) is an environment in which everywhere and every device became smart in a smart world. Internet of Things is growing vastly; it is an integrated system of uniquely identifiable communicating devices which exchange information in a connected network to provide extensive services. IoT devices have very limited memory, computational power, and power supply. Traditional operating systems (OS) have no way to meet the needs of IoT systems. In this paper, we thus analyze the challenges of IoT OS and survey applicable open source OSs.

  6. An open-source job management framework for parameter-space exploration: OACIS

    NASA Astrophysics Data System (ADS)

    Murase, Y.; Uchitane, T.; Ito, N.

    2017-11-01

    We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; however, in such cases, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks when conducting simulations by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.

  7. SuML: A Survey Markup Language for Generalized Survey Encoding

    PubMed Central

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  8. EMUstack: An open source route to insightful electromagnetic computation via the Bloch mode scattering matrix method

    NASA Astrophysics Data System (ADS)

    Sturmberg, Björn C. P.; Dossou, Kokou B.; Lawrence, Felix J.; Poulton, Christopher G.; McPhedran, Ross C.; Martijn de Sterke, C.; Botten, Lindsay C.

    2016-05-01

    We describe EMUstack, an open-source implementation of the Scattering Matrix Method (SMM) for solving field problems in layered media. The fields inside nanostructured layers are described in terms of Bloch modes that are found using the Finite Element Method (FEM). Direct access to these modes allows the physical intuition of thin film optics to be extended to complex structures. The combination of the SMM and the FEM makes EMUstack ideally suited for studying lossy, high-index contrast structures, which challenge conventional SMMs.
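
    To convey the core idea in the simplest setting, the hedged sketch below combines the amplitude reflection/transmission coefficients of two stacked elements in the scalar (single-mode) case, summing the multiple reflections between them; EMUstack performs the analogous operation with full Bloch-mode matrices, and the coefficients used here are arbitrary.

      # Scalar (single-mode) composition of two stacked elements' scattering
      # coefficients; propagation phases are assumed to be absorbed into the
      # coefficients, and all numbers below are hypothetical.
      def combine(r1, t1, r1b, t1b, r2, t2):
          """Element 1 above element 2; 'b' marks coefficients for light hitting
          element 1 from below (reflections bouncing back up off element 2)."""
          denom = 1.0 - r1b * r2           # geometric series of internal reflections
          r = r1 + t1b * r2 * t1 / denom   # total reflection seen from above
          t = t2 * t1 / denom              # total transmission out the bottom
          return r, t

      print(combine(r1=0.2, t1=0.8, r1b=0.2, t1b=0.8, r2=0.3, t2=0.7))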

  9. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Giving cloud users an effective way to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide end-users with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is built on an open-source distributed file system. In it, massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatio-temporal computing tasks. Docker provides resource isolation for IO, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code directly on the web, so they can design their own data processing algorithms.
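
    The hedged sketch below shows the kind of per-scene processing a user might run inside such an IPython/Docker environment: computing NDVI from red and near-infrared bands with GDAL and NumPy. The file name and band order are assumptions made for illustration.

      import numpy as np
      from osgeo import gdal

      ds = gdal.Open("/data/public/landsat_scene.tif")          # hypothetical input raster
      red = ds.GetRasterBand(3).ReadAsArray().astype(np.float32)
      nir = ds.GetRasterBand(4).ReadAsArray().astype(np.float32)

      ndvi = (nir - red) / np.maximum(nir + red, 1e-6)           # avoid division by zero
      print("mean NDVI:", float(np.nanmean(ndvi)))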

  10. Isolated Open Rotor Noise Prediction Assessment Using the F31A31 Historical Blade Set

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, William T.; Boyd, D. Douglas, Jr.; Zawodny, Nikolas S.

    2016-01-01

    In an effort to mitigate next-generation fuel efficiency and environmental impact concerns for aviation, open rotor propulsion systems have received renewed interest. However, maintaining the high propulsive efficiency while simultaneously meeting noise goals has been one of the challenges in making open rotor propulsion a viable option. Improvements in prediction tools and design methodologies have opened the design space for next generation open rotor designs that satisfy these challenging objectives. As such, validation of aerodynamic and acoustic prediction tools has been an important aspect of open rotor research efforts. This paper describes validation efforts of a combined computational fluid dynamics and Ffowcs Williams and Hawkings equation methodology for open rotor aeroacoustic modeling. Performance and acoustic predictions were made for a benchmark open rotor blade set and compared with measurements over a range of rotor speeds and observer angles. Overall, the results indicate that the computational approach is acceptable for assessing low-noise open rotor designs. Additionally, this approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.

  11. An open source device for operant licking in rats.

    PubMed

    Longley, Matthew; Willis, Ethan L; Tay, Cindy X; Chen, Hao

    2017-01-01

    We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior.
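
    As an illustration of how such a Raspberry Pi-based device can be wired together in software, the sketch below counts licks from a touch sensor and triggers a syringe pump on a fixed-ratio schedule. It is not the authors' published code (see the repository above for that); pin numbers and timing values are hypothetical.

```python
# Hypothetical sketch of a lick counter with fixed-ratio reward on a Raspberry Pi.
import time
import RPi.GPIO as GPIO

TOUCH_PIN = 17          # hypothetical BCM pin for the lick/touch sensor
PUMP_PIN = 27           # hypothetical BCM pin driving the syringe pump
FIXED_RATIO = 10        # deliver a reward every 10 licks (FR10 schedule)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TOUCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)

licks = []

def on_lick(channel):
    licks.append(time.time())             # timestamp each lick
    if len(licks) % FIXED_RATIO == 0:     # fixed-ratio reinforcement
        GPIO.output(PUMP_PIN, GPIO.HIGH)  # briefly run the pump
        time.sleep(0.05)
        GPIO.output(PUMP_PIN, GPIO.LOW)

GPIO.add_event_detect(TOUCH_PIN, GPIO.RISING, callback=on_lick, bouncetime=20)

try:
    time.sleep(1800)                      # run a 30-minute session
finally:
    GPIO.cleanup()
```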

  12. An open source device for operant licking in rats

    PubMed Central

    Longley, Matthew; Willis, Ethan L.; Tay, Cindy X.

    2017-01-01

    We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior. PMID:28229020

  13. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  14. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and Linux OS, the Waggle system comprises two components - the Waggle field-node and Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
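
    The sketch below is a purely hypothetical illustration of the field-node-to-cloud pattern described above, not the Waggle API: a node samples a sensor and relays a timestamped reading to a placeholder cloud ingest endpoint.

```python
# Hypothetical field-node loop: read a sensor and forward JSON samples to a
# placeholder cloud endpoint, tolerating intermittent communication links.
import json
import time
import urllib.request

CLOUD_URL = "https://waggle-cloud.example.org/ingest"   # placeholder endpoint
NODE_ID = "node-0001"                                   # placeholder node ID

def read_temperature_c():
    # stand-in for a real sensor driver
    return 21.7

while True:
    sample = {"node": NODE_ID, "ts": time.time(), "temp_c": read_temperature_c()}
    req = urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=10)          # relay to the cloud
    except OSError:
        pass                                             # drop the sample if the link is down
    time.sleep(60)                                       # one sample per minute
```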

  15. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  16. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  17. OpenNFT: An open-source Python/Matlab framework for real-time fMRI neurofeedback training based on activity, connectivity and multivariate pattern analysis.

    PubMed

    Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van

    2017-08-01

    Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical developments of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM) that is also open-source and one of the most widely used fMRI data analysis software. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Completeness and overlap in open access systems: Search engines, aggregate institutional repositories and physics-related open sources.

    PubMed

    Tsay, Ming-Yueh; Wu, Tai-Luan; Tseng, Ling-Li

    2017-01-01

    This study examines the completeness and overlap of coverage in physics of six open access scholarly communication systems, including two search engines (Google Scholar and Microsoft Academic), two aggregate institutional repositories (OAIster and OpenDOAR), and two physics-related open sources (arXiv.org and Astrophysics Data System). The 2001-2013 Nobel Laureates in Physics served as the sample. Bibliographic records of their publications were retrieved and downloaded from each system, and a computer program was developed to perform the analytical tasks of sorting, comparison, elimination, aggregation and statistical calculations. Quantitative analyses and cross-referencing were performed to determine the completeness and overlap of the system coverage of the six open access systems. The results may enable scholars to select an appropriate open access system as an efficient scholarly communication channel, and academic institutions may build institutional repositories or independently create citation index systems in the future. Suggestions on indicators and tools for academic assessment are presented based on the comprehensiveness assessment of each system.
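
    The following sketch illustrates, under simplifying assumptions, the kind of overlap computation described above; it is not the authors' program. Titles are normalised and compared as sets, with completeness measured against the union of records from all systems.

```python
# Hypothetical illustration of completeness and pairwise overlap between
# bibliographic record sets retrieved from different open access systems.
from itertools import combinations

def normalise(title):
    # crude normalisation: lower-case and keep only alphanumeric characters
    return "".join(ch.lower() for ch in title if ch.isalnum())

# placeholder record lists keyed by system name
systems = {
    "Google Scholar": ["A Paper on X", "Another Paper"],
    "arXiv.org": ["A paper on X.", "Third Paper"],
}

sets = {name: {normalise(t) for t in titles} for name, titles in systems.items()}
union = set().union(*sets.values())

for name, records in sets.items():
    print(f"{name}: completeness {len(records) / len(union):.0%}")

for (a, sa), (b, sb) in combinations(sets.items(), 2):
    print(f"overlap {a} / {b}: {len(sa & sb)} shared records")
```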

  19. Method and apparatus for enhanced sensitivity filmless medical x-ray imaging, including three-dimensional imaging

    DOEpatents

    Parker, S.

    1995-10-24

    A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z1 above the upper collimator plane, distance z2 above the lower collimator plane, and distance z3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v1, v2, v3 proportional to z1, z2 and z3, respectively. The pattern and size of openings in the collimators, and between detector positions, is proportional such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site. 5 figs.

  20. Method and apparatus for enhanced sensitivity filmless medical x-ray imaging, including three-dimensional imaging

    DOEpatents

    Parker, Sherwood

    1995-01-01

    A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z1 above the upper collimator plane, distance z2 above the lower collimator plane, and distance z3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v1, v2, v3 proportional to z1, z2 and z3, respectively. The pattern and size of openings in the collimators, and between detector positions, is proportional such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site.

  1. BioSig: The Free and Open Source Software Library for Biomedical Signal Processing

    PubMed Central

    Vidaurre, Carmen; Sander, Tilmann H.; Schlögl, Alois

    2011-01-01

    BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals. PMID:21437227

  2. BioSig: the free and open source software library for biomedical signal processing.

    PubMed

    Vidaurre, Carmen; Sander, Tilmann H; Schlögl, Alois

    2011-01-01

    BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals.

  3. Open-Source Integrated Design-Analysis Environment For Nuclear Energy Advanced Modeling & Simulation Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The framework created through the Open-Source Integrated Design-Analysis Environment (IDAE) for Nuclear Energy Advanced Modeling & Simulation grant has simplified and democratized advanced modeling and simulation in the nuclear energy industry across a range of nuclear engineering applications. It leverages millions of investment dollars from the Department of Energy's Office of Nuclear Energy for modeling and simulation of light water reactors and the Office of Nuclear Energy's research and development. The IDAE framework enhanced Kitware’s Computational Model Builder (CMB) while leveraging existing open-source toolkits, creating a graphical end-to-end umbrella that guides end-users and developers through the nuclear energy advanced modeling and simulation lifecycle. In addition, the work delivered strategic advancements in meshing and visualization for ensembles.

  4. Whole earth modeling: developing and disseminating scientific software for computational geophysics.

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    Historically, a great deal of specialized scientific software for modeling and data analysis has been developed by individual researchers or small groups of scientists working on their own specific research problems. As the magnitude of available data and computer power has increased, so has the complexity of scientific problems addressed by computational methods, creating both a need to sustain existing scientific software, and expand its development to take advantage of new algorithms, new software approaches, and new computational hardware. To that end, communities like the Computational Infrastructure for Geodynamics (CIG) have been established to support the use of best practices in scientific computing for solid earth geophysics research and teaching. Working as a scientific community enables computational geophysicists to take advantage of technological developments, improve the accuracy and performance of software, build on prior software development, and collaborate more readily. The CIG community, and others, have adopted an open-source development model, in which code is developed and disseminated by the community in an open fashion, using version control and software repositories like Git. One emerging issue is how to adequately identify and credit the intellectual contributions involved in creating open source scientific software. The traditional method of disseminating scientific ideas, peer reviewed publication, was not designed for reviewing or crediting scientific software, although emerging publication strategies such as software journals are attempting to address the need. We are piloting an integrated approach in which authors are identified and credited as scientific software is developed and run. Successful software citation requires integration with the scholarly publication and indexing mechanisms as well, to assign credit, ensure discoverability, and provide provenance for software.

  5. Enabling systematic, harmonised and large-scale biofilms data computation: the Biofilms Experiment Workbench.

    PubMed

    Pérez-Rodríguez, Gael; Glez-Peña, Daniel; Azevedo, Nuno F; Pereira, Maria Olívia; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-03-01

    Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within the human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. The implementation favours free and open-source third-party tools, such as the R statistical package, and uses the Web services of the BiofOmics database to enable public experiment deposition. First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web publishable reports, and the comparison of results between laboratories. A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under LGPL license. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    PubMed

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis.

  7. WImpiBLAST: Web Interface for mpiBLAST to Help Biologists Perform Large-Scale Annotation Using High Performance Computing

    PubMed Central

    Sharma, Parichit; Mantri, Shrikant S.

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410

  8. The Neurona at Home project: Simulating a large-scale cellular automata brain in a distributed computing environment

    NASA Astrophysics Data System (ADS)

    Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.

    2013-01-01

    The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open source solution for grid computing in the Internet. Volunteers use their computers to complete a small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with at least one million neurons. We consider a cellular automata version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour and to compare it with the electroencephalographic signals measured in real brains.
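
    A minimal sketch of one update step of such a probabilistic integrate-and-fire cellular automaton is shown below. It is an illustration only, not the Neurona@Home code; the network size, connectivity, and probabilities are placeholder values.

```python
# Sketch of a probabilistic cellular automaton on a random network: active
# excitatory nodes may activate their neighbours, active inhibitory nodes may
# deactivate them. All parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 10_000
k = 10                                   # neighbours per node (random network)
neighbours = rng.integers(0, n_nodes, size=(n_nodes, k))
inhibitory = rng.random(n_nodes) < 0.2   # 20% of nodes are inhibitory
active = rng.random(n_nodes) < 0.01      # initial active fraction
p_act, p_inh = 0.15, 0.30                # activation / deactivation probabilities

def step(active):
    new = active.copy()
    for i in np.flatnonzero(active):
        targets = neighbours[i]
        hits = rng.random(k) < (p_inh if inhibitory[i] else p_act)
        if inhibitory[i]:
            new[targets[hits]] = False   # inhibitory node silences neighbours
        else:
            new[targets[hits]] = True    # excitatory node recruits neighbours
    return new

for t in range(100):
    active = step(active)
print("active fraction after 100 steps:", active.mean())
```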

  9. Innovation in engineering education through computer assisted learning and virtual university model

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Raicu, G.

    2015-11-01

    The paper presents the most important aspects of innovation in Engineering Education using Computer Assisted Learning. The authors propose to raise the quality of Engineering Education programs of study to European standards. The use of computer assisted learning methodologies in all studies is becoming an important resource in Higher Education. We intend to improve the concept of e-Learning by using virtual terminals, online support, and special training through live seminars and interactive labs, in order to develop a virtual university model. We also intend to encourage computer assisted learning and innovation as sources of competitive advantage, to permit vision and learning analysis, and to identify new sources of technology and ideas. Our work is based on university datasets collected during the last fifteen years using several e-Learning systems. At Constanta Maritime University (CMU), the use of eLearning and Knowledge Management Services (KMS) is very important, and we apply it to achieve strategic objectives such as collaboration, sharing and good practice. We have had experience in this field since 2000, using Moodle as the KMS in our university. The term KMS can be associated with Open Source Software, Open Standards, Open Protocols and Open Knowledge licenses, initiatives and policies. The CMU Virtual Campus today has over 12500 active users. Another experience of the authors is the implementation of the MariTrainer Wiki educational platform, based on Dokeos and DekiWiki, under the MARICOMP and MEP Leonardo da Vinci projects. We also present in this paper a case study under the EU-funded POSDRU project, in which the authors implemented another educational platform in technological high schools in Romania, used by over 1000 teachers. Based on these large datasets, the study seeks to improve the concept of e-Learning teaching using new technologies. The new concept presented in this paper is that teaching and learning will be interactive and live. The new and modern techniques include flexible learning courses and the production and testing of learning demonstrators. All information on the virtual educational platform remains open, and communication between participants continues after graduation, so we can speak of creating and maintaining a community of graduates and a partnership with them. Every European university should have a department that provides computer assisted learning, supporting knowledge creation through learning, capture and explication, sharing and collaborative communication, access, use and reuse, and knowledge archiving.

  10. Building Energy Management Open Source Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is the repository for Building Energy Management Open Source Software (BEMOSS), which is an open source operating system that is engineered to improve sensing and control of equipment in small- and medium-sized commercial buildings. BEMOSS offers the following key features: (1) Open source, open architecture – BEMOSS is an open source operating system that is built upon VOLTTRON – a distributed agent platform developed by Pacific Northwest National Laboratory (PNNL). BEMOSS was designed to make it easy for hardware manufacturers to seamlessly interface their devices with BEMOSS. Software developers can also contribute to adding additional BEMOSS functionalities and applications. (2) Plug & play – BEMOSS was designed to automatically discover supported load controllers (including smart thermostats, VAV/RTUs, lighting load controllers and plug load controllers) in commercial buildings. (3) Interoperability – BEMOSS was designed to work with load control devices from different manufacturers that operate on different communication technologies and data exchange protocols. (4) Cost effectiveness – Implementation of BEMOSS is deemed to be cost-effective as it was built upon a robust open source platform that can operate on a low-cost single-board computer, such as Odroid. This feature could contribute to its rapid deployment in small- or medium-sized commercial buildings. (5) Scalability and ease of deployment – With its multi-node architecture, BEMOSS provides a distributed architecture where load controllers in a multi-floor and high occupancy building could be monitored and controlled by multiple single-board computers hosting BEMOSS. This makes it possible for a building engineer to deploy BEMOSS in one zone of a building, be comfortable with its operation, and later on expand the deployment to the entire building to make it more energy efficient. (6) Ability to provide local and remote monitoring – BEMOSS provides both local and remote monitoring ability with role-based access control. (7) Security – In addition to built-in security features provided by VOLTTRON, BEMOSS provides enhanced security features, including BEMOSS discovery approval process, encrypted core-to-node communication, thermostat anti-tampering feature and many more. (8) Support from the Advisory Committee – BEMOSS was developed in consultation with an advisory committee from the beginning of the project. The BEMOSS advisory committee comprises representatives from 22 organizations from government and industry.

  11. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
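
    As a hypothetical example of using open-source computer vision for sensory feedback, the sketch below locates a bright target in a camera frame with OpenCV and converts its centroid offset into a simple proportional steering term. It is not the project's actual software.

```python
# Hypothetical vision-feedback sketch: threshold a frame, find the bright
# target's centroid, and derive a proportional steering command from its
# horizontal offset relative to the image centre.
import cv2

cap = cv2.VideoCapture(0)                      # default camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]               # target centroid (x)
        error = cx - frame.shape[1] / 2        # offset from the image centre
        command = 0.01 * error                 # simple proportional steering term
        print("steering command:", command)
cap.release()
```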

  12. LINUX, Virtualization, and the Cloud: A Hands-On Student Introductory Lab

    ERIC Educational Resources Information Center

    Serapiglia, Anthony

    2013-01-01

    Many students are entering Computer Science education with limited exposure to operating systems and applications other than those produced by Apple or Microsoft. This gap in familiarity with the Open Source community can quickly be bridged with a simple exercise that can also be used to strengthen two other important current computing concepts,…

  13. Quantitative computed tomography (QCT) as a radiology reporting tool by using optical character recognition (OCR) and macro program.

    PubMed

    Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2012-12-01

    The objectives are (1) to introduce a new concept of making a quantitative computed tomography (QCT) reporting system by using optical character recognition (OCR) and a macro program and (2) to illustrate the practical usages of the QCT reporting system in the radiology reading environment. This reporting system was created as a development tool by using open-source OCR software and an open-source macro program. The main module was designed for OCR to report QCT images in the radiology reading process. The principal processes are as follows: (1) to save a QCT report as a graphic file, (2) to recognize the characters from an image as text, (3) to extract the T scores from the text, (4) to perform error correction, (5) to reformat the values into the QCT radiology reporting template, and (6) to paste the reports into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy test of OCR was performed on randomly selected QCTs. The QCT reporting tool successfully performed OCR of the QCT reports, and the diagnosis of normal, osteopenia, or osteoporosis is also determined. Error correction of the OCR output is done with an AutoHotkey-coded module. The T scores of the femoral neck and lumbar vertebrae were extracted with accuracies of 100% and 95.4%, respectively. A convenient QCT reporting system could be established by utilizing open-source OCR software and an open-source macro program. This method can be easily adapted for other QCT applications and PACS/EMR.
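
    A rough sketch of steps (2)-(4) is given below, assuming the open-source Tesseract engine via the pytesseract wrapper; the file name, report wording, and regular expression are hypothetical, and this is not the authors' implementation.

```python
# Hypothetical sketch: OCR a saved QCT report image and extract T-scores,
# then apply the WHO thresholds (normal / osteopenia / osteoporosis).
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("qct_report.png"))

# match lines such as "L1-L4 T-score: -2.7" or "Femoral neck T-score -1.8"
scores = {m.group(1).strip(): float(m.group(2))
          for m in re.finditer(r"([A-Za-z0-9\- ]+?)\s*T-?score[:\s]+(-?\d+\.\d+)", text)}

def classify(t):
    if t >= -1.0:
        return "normal"
    return "osteopenia" if t > -2.5 else "osteoporosis"

for region, t in scores.items():
    print(region, t, classify(t))
```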

  14. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  15. eMZed: an open source framework in Python for rapid and interactive development of LC/MS data analysis workflows.

    PubMed

    Kiefer, Patrick; Schmitt, Uwe; Vorholt, Julia A

    2013-04-01

    The Python-based, open-source eMZed framework was developed for mass spectrometry (MS) users to create tailored workflows for liquid chromatography (LC)/MS data analysis. The goal was to establish a unique framework with comprehensive basic functionalities that are easy to apply and allow for the extension and modification of the framework in a straightforward manner. eMZed supports the iterative development and prototyping of individual evaluation strategies by providing a computing environment and tools for inspecting and modifying underlying LC/MS data. The framework specifically addresses non-expert programmers, as it requires only basic knowledge of Python and relies largely on existing successful open-source software, e.g. OpenMS. The framework eMZed and its documentation are freely available at http://emzed.biol.ethz.ch/. eMZed is published under the GPL 3.0 license, and an online discussion group is available at https://groups.google.com/group/emzed-users. Supplementary data are available at Bioinformatics online.

  16. Development of a web application for water resources based on open source software

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing of geospatial data, (2) support of water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computing environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.

  17. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    PubMed Central

    2012-01-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement and attach dynamically a number of output processors. The user application defines jobs in a plug-in like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
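
    As one concrete example of an exact calculation in the infinite-alleles setting, the sketch below evaluates the Ewens sampling formula for an allele-frequency configuration. It is an independent illustration of the kind of computation such a framework supports, not code from the package itself.

```python
# Exact probability of an allele-count configuration under the infinite-alleles
# coalescent (Ewens sampling formula).
from math import factorial, prod

def ewens_probability(a, theta):
    """a[j-1] = number of allele types observed exactly j times."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))   # theta*(theta+1)*...*(theta+n-1)
    return (factorial(n) / rising) * prod(
        (theta / j) ** aj / factorial(aj) for j, aj in enumerate(a, start=1)
    )

# e.g. five genes: three singleton alleles and one allele seen twice
print(ewens_probability([3, 1, 0, 0, 0], theta=1.0))   # -> 1/12
```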

  18. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built from open source components such as OpenStack, CEPH, etc.

  19. Executable research compendia in geoscience research infrastructures

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source offer scientists unprecedented opportunities, nowadays often in a field "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [3]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format closing the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entity (iv) Self-consistency: ERCs remove dependence on ephemeral sources (v) Execution: ERC services create and execute a packaged analysis but integrate with existing platforms for display and control. These integrations are vital for capturing workflows in RIs and connect key stakeholders (scientists, publishers, librarians). They are demonstrated using developments by the DFG-funded project Opening Reproducible Research (http://o2r.info). Semi-automatic creation of ERCs based on research workflows is a core goal of the project. References [0] Tony Hey, Stewart Tansley, Kristin Tolle (eds), 2009. The Fourth Paradigm: Data-Intensive Scientific Discovery. Microsoft Research. [1] P. Martin et al., Open Information Linking for Environmental Research Infrastructures, 2015 IEEE 11th International Conference on e-Science, Munich, 2015, pp. 513-520. doi: 10.1109/eScience.2015.66 [2] Y. Chen et al., Analysis of Common Requirements for Environmental Science Research Infrastructures, The International Symposium on Grids and Clouds (ISGC) 2013, Taipei, 2013, http://pos.sissa.it/archive/conferences/179/032/ISGC [3] Opening Reproducible Research, Geophysical Research Abstracts Vol. 18, EGU2016-7396, 2016, http://meetingorganizer.copernicus.org/EGU2016/EGU2016-7396.pdf

  20. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using the Open Computing Language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of interested sites are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n) and the x-ray fluence is the weighted sum of the exponential of line integral for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three (air, gray/white matter, and bone) regions for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs. A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on GPGPU for one energy bin in comparison to 8.83 s on CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6 and was virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and the subsequent digitally reconstructed radiographs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time was fast (0.3 s per projection image) enough for real-time IGRT (image-guided radiotherapy) applications.
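
    The core fluence model described above can be sketched in a few lines: the detected fluence is the spectrum-weighted sum of exponentials of the per-energy line integrals, with Poisson noise added. The NumPy sketch below is an illustration with placeholder attenuation values and path lengths, not the authors' OpenCL implementation.

```python
# Polyenergetic fluence for one ray: sum_n w_n * exp(-line_integral_n), plus
# Poisson noise. All numbers are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

energies_keV = np.array([50.0, 100.0, 500.0, 1000.0])      # energy bins E(n)
weights = np.array([0.1, 0.3, 0.4, 0.2])                    # spectrum weights w(n)
# placeholder mass attenuation (mu/rho) per tissue and energy bin, cm^2/g
mu_rho = {"bone": np.array([0.42, 0.19, 0.090, 0.065]),
          "soft": np.array([0.23, 0.17, 0.097, 0.071])}
rho = {"bone": 1.85, "soft": 1.06}                          # g/cm^3

# path lengths (cm) through each tissue along one ray (e.g. from a Siddon-type traversal)
path_cm = {"bone": 1.2, "soft": 14.0}

line_integral = sum(mu_rho[t] * rho[t] * path_cm[t] for t in path_cm)  # per energy bin
fluence = np.sum(weights * np.exp(-line_integral))
noisy = rng.poisson(fluence * 1e5) / 1e5                    # Poisson noise at 1e5 photons
print(fluence, noisy)
```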

  1. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges which were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To be able to do this declaratively and efficiently, a C library was implemented that allows interfacing the SPICE toolkit for geometrical computations from the Python language and processing as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrate well into the Python language and ecosystem. When done well, this very much simplifies the structure of the code and facilitates the testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular Javascript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, thus making them easily accessible to interested parties at large. As feedback came and new ideas were to be explored, the authors benefited greatly from the design of the Python-to-SPICE library which allowed the expression of algorithms to be concise and easier to communicate. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.
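
    A comparable geometry computation can be sketched with the community SpiceyPy bindings rather than the authors' in-house C library; the meta-kernel path below is a placeholder, and real Rosetta SPICE kernels must be furnished for it to run.

```python
# Hedged sketch of a SPICE geometry query from Python using SpiceyPy.
import spiceypy as spice

spice.furnsh("rosetta_meta_kernel.tm")                 # placeholder meta-kernel

et = spice.str2et("2015-04-12T00:00:00")               # UTC -> ephemeris time
# spacecraft-to-comet vector in J2000, no aberration correction
pos, light_time = spice.spkpos("CHURYUMOV-GERASIMENKO", et, "J2000",
                               "NONE", "ROSETTA")
print("distance to comet [km]:", spice.vnorm(pos))

spice.kclear()                                         # unload kernels
```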

  2. Open Ephys electroencephalography (Open Ephys  +  EEG): a modular, low-cost, open-source solution to human neural recording

    NASA Astrophysics Data System (ADS)

    Black, Christopher; Voigts, Jakob; Agrawal, Uday; Ladow, Max; Santoyo, Juan; Moore, Christopher; Jones, Stephanie

    2017-06-01

    Objective. Electroencephalography (EEG) offers a unique opportunity to study human neural activity non-invasively with millisecond resolution using minimal equipment in or outside of a lab setting. EEG can be combined with a number of techniques for closed-loop experiments, where external devices are driven by specific neural signals. However, reliable, commercially available EEG systems are expensive, often making them impractical for individual use and research development. Moreover, by design, a majority of these systems cannot be easily altered to the specification needed by the end user. We focused on mitigating these issues by implementing open-source tools to develop a new EEG platform to drive down research costs and promote collaboration and innovation. Approach. Here, we present methods to expand the open-source electrophysiology system, Open Ephys (www.openephys.org), to include human EEG recordings. We describe the equipment and protocol necessary to interface various EEG caps with the Open Ephys acquisition board, and detail methods for processing data. We present applications of Open Ephys  +  EEG as a research tool and discuss how this innovative EEG technology lays a framework for improved closed-loop paradigms and novel brain-computer interface experiments. Main results. The Open Ephys  +  EEG system can record reliable human EEG data, as well as human EMG data. A side-by-side comparison of eyes closed 8-14 Hz activity between the Open Ephys  +  EEG system and the Brainvision ActiCHamp EEG system showed similar average power and signal to noise. Significance. Open Ephys  +  EEG enables users to acquire high-quality human EEG data comparable to that of commercially available systems, while maintaining the price point and extensibility inherent to open-source systems.
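
    As a hedged illustration of the kind of band-power comparison mentioned above (not the authors' analysis code), the sketch below estimates 8-14 Hz power from a synthetic EEG channel using Welch's method from SciPy.

```python
# Estimate alpha-band (8-14 Hz) power from a synthetic EEG trace with Welch's method.
import numpy as np
from scipy.signal import welch

fs = 500.0                                   # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)                 # 30 s of data
rng = np.random.default_rng(0)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-second windows
band = (f >= 8) & (f <= 14)
band_power = np.sum(psd[band]) * (f[1] - f[0])    # integrate PSD over the alpha band
print(f"8-14 Hz power: {band_power:.3e} V^2")
```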

  3. Open Ephys electroencephalography (Open Ephys  +  EEG): a modular, low-cost, open-source solution to human neural recording.

    PubMed

    Black, Christopher; Voigts, Jakob; Agrawal, Uday; Ladow, Max; Santoyo, Juan; Moore, Christopher; Jones, Stephanie

    2017-06-01

    Electroencephalography (EEG) offers a unique opportunity to study human neural activity non-invasively with millisecond resolution using minimal equipment in or outside of a lab setting. EEG can be combined with a number of techniques for closed-loop experiments, where external devices are driven by specific neural signals. However, reliable, commercially available EEG systems are expensive, often making them impractical for individual use and research development. Moreover, by design, a majority of these systems cannot be easily altered to the specification needed by the end user. We focused on mitigating these issues by implementing open-source tools to develop a new EEG platform to drive down research costs and promote collaboration and innovation. Here, we present methods to expand the open-source electrophysiology system, Open Ephys (www.openephys.org), to include human EEG recordings. We describe the equipment and protocol necessary to interface various EEG caps with the Open Ephys acquisition board, and detail methods for processing data. We present applications of Open Ephys  +  EEG as a research tool and discuss how this innovative EEG technology lays a framework for improved closed-loop paradigms and novel brain-computer interface experiments. The Open Ephys  +  EEG system can record reliable human EEG data, as well as human EMG data. A side-by-side comparison of eyes closed 8-14 Hz activity between the Open Ephys  +  EEG system and the Brainvision ActiCHamp EEG system showed similar average power and signal to noise. Open Ephys  +  EEG enables users to acquire high-quality human EEG data comparable to that of commercially available systems, while maintaining the price point and extensibility inherent to open-source systems.

  4. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.

    PubMed

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-01

    To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
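    To make the idea of a spectrum-derived HVL concrete, the following sketch finds the aluminium thickness that halves the kerma-weighted transmission of a tabulated spectrum; the weights and attenuation coefficients are placeholder values, not Spektr output or measured data.

      # Sketch: numerical half-value layer for a polyenergetic beam (placeholder data).
      import numpy as np
      from scipy.optimize import brentq

      kerma_weight = np.array([0.10, 0.30, 0.30, 0.20, 0.10])   # relative kerma per energy bin
      mu_al_per_mm = np.array([1.10, 0.37, 0.20, 0.15, 0.12])   # linear attenuation in Al (assumed)

      def transmission(t_mm):
          """Fraction of kerma transmitted through t_mm of aluminium."""
          return np.sum(kerma_weight * np.exp(-mu_al_per_mm * t_mm)) / np.sum(kerma_weight)

      hvl_mm = brentq(lambda t: transmission(t) - 0.5, 0.0, 50.0)
      print(f"HVL = {hvl_mm:.2f} mm Al")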

  5. GiPSi:a framework for open source/open architecture software development for organ-level surgical simulation.

    PubMed

    Cavuşoğlu, M Cenk; Göktekin, Tolga G; Tendick, Frank

    2006-04-01

    This paper presents the architectural details of an evolving open source/open architecture software framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an application programming interface for interfacing dynamic models defined over spatial domains. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the model developer to choose the most natural geometric representation for each case. Input/output interfaces for visualization and haptics for real-time interactive applications have also been provided.

  6. SU-G-IeP4-04: DD-Neutron Source Collimation for Neutron Stimulated Emission Computed Tomography: A Monte Carlo Simulation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, G; Kapadia, A

    Purpose: To optimize collimation and shielding for a deuterium-deuterium (DD) neutron generator for an inexpensive and compact clinical neutron imaging system. The envisioned application is cancer diagnosis through Neutron Stimulated Emission Computed Tomography (NSECT). Methods: Collimator designs were tested with an isotropic 2.5 MeV neutron source through GEANT4 simulations. The collimator is a 52×52×52 cm³ polyethylene block coupled with a 1 cm lead sheet in sequence. A composite opening was modeled into the collimator to permit passage of neutrons. The opening varied in shape (cylindrical vs. tapered), size (1–5 cm source-side and target-side openings) and aperture placement (13–39 cm from source-side). Spatial and energy distributions of neutrons and gammas were tracked for each collimator design. Parameters analyzed were primary beam width (FWHM), divergence, and efficiency (percent transmission) for different configurations of the collimator. Select resultant outputs were then used for simulated NSECT imaging of a virtual breast phantom containing a 2.5 cm diameter tumor to assess the effect of the collimator on spatial resolution, noise, and scan time. Finally, a composite shielding enclosure made of polyethylene and lead was designed and evaluated to block 99.99% of neutron and gamma radiation generated in the system. Results: Analysis of the primary beam indicated that the beam width scales linearly with the aperture size. Increasing the source-side opening allowed at least 20% more neutron throughput for all designs relative to the cylindrical openings. Maximum throughput for all designs was 364% relative to cylindrical openings. Conclusion: The work indicates potential for collimating and shielding a DD neutron generator for use in a clinical NSECT system. The proposed collimator designs produced a well-defined collimated neutron beam that can be used to image samples of interest with millimeter resolution. Balance in output efficiency, noise reduction, and scan time should be considered to determine the optimal design for specific NSECT applications.

  7. Open Babel: An open chemical toolbox

    PubMed Central

    2011-01-01

    Background A frequent problem in computational modeling is the interconversion of chemical structures between different formats. While standard interchange formats exist (for example, Chemical Markup Language) and de facto standards have arisen (for example, SMILES format), the need to interconvert formats is a continuing problem due to the multitude of different application areas for chemistry data, differences in the data stored by different formats (0D versus 3D, for example), and competition between software along with a lack of vendor-neutral formats. Results We discuss, for the first time, Open Babel, an open-source chemical toolbox that speaks the many languages of chemical data. Open Babel version 2.3 interconverts over 110 formats. The need to represent such a wide variety of chemical and molecular data requires a library that implements a wide range of cheminformatics algorithms, from partial charge assignment and aromaticity detection, to bond order perception and canonicalization. We detail the implementation of Open Babel, describe key advances in the 2.3 release, and outline a variety of uses both in terms of software products and scientific research, including applications far beyond simple format interconversion. Conclusions Open Babel presents a solution to the proliferation of multiple chemical file formats. In addition, it provides a variety of useful utilities from conformer searching and 2D depiction, to filtering, batch conversion, and substructure and similarity searching. For developers, it can be used as a programming library to handle chemical data in areas such as organic chemistry, drug design, materials science, and computational chemistry. It is freely available under an open-source license from http://openbabel.org. PMID:21982300
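    For readers who want to try the format interconversion described above, a minimal sketch using Open Babel's Python bindings follows; note that the import path shown is that of Open Babel 3.x (older releases exposed a top-level pybel module), and the file names are placeholders.

      # Sketch: interconvert chemical formats with Open Babel's pybel bindings.
      from openbabel import pybel

      mol = pybel.readstring("smi", "c1ccccc1O")    # parse a SMILES string (phenol)
      mol.title = "phenol"
      print(mol.write("mol"))                       # emit an MDL mol block

      # Batch-convert a (hypothetical) SDF file to SMILES
      with open("library.smi", "w") as out:
          for m in pybel.readfile("sdf", "library.sdf"):
              out.write(m.write("smi"))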

  8. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow behind a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and the Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  9. A Standard Platform for Testing and Comparison of MDAO Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.

    2012-01-01

    The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
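    As a taste of the framework in which the test procedure is implemented, the following is a minimal OpenMDAO sketch that assembles and runs a single analysis component; it uses the API of recent OpenMDAO releases, not necessarily the version available in 2012, and the component itself is purely illustrative.

      # Sketch: build and evaluate a one-component OpenMDAO model (recent API assumed).
      import openmdao.api as om

      prob = om.Problem()
      prob.model.add_subsystem("disc", om.ExecComp("y = (x - 3.0) ** 2 + 2.0"),
                               promotes=["x", "y"])
      prob.setup()
      prob.set_val("x", 5.0)
      prob.run_model()
      print("y =", prob.get_val("y"))    # expected: (5 - 3)^2 + 2 = 6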

  10. Geowall: Investigations into low-cost stereo display technologies

    USGS Publications Warehouse

    Steinwand, Daniel R.; Davis, Brian; Weeks, Nathan

    2003-01-01

    Recently, the combination of new projection technology, fast, low-cost graphics cards, and Linux-powered personal computers has made it possible to provide a stereoprojection and stereoviewing system that is much more affordable than previous commercial solutions. These Geowall systems are low-cost visualization systems built with commodity off-the-shelf components, run on open-source (and other) operating systems, and use open-source application software. In short, they are "Beowulf-class" visualization systems that provide a cost-effective way for the U.S. Geological Survey to broaden participation in the visualization community and view stereoimagery and three-dimensional models.

  11. Sirepo - Warp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp’s capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.

  12. High Resolution Topography of Polar Regions from Commercial Satellite Imagery, Petascale Computing and Open Source Software

    NASA Astrophysics Data System (ADS)

    Morin, Paul; Porter, Claire; Cloutier, Michael; Howat, Ian; Noh, Myoung-Jong; Willis, Michael; Kramer, WIlliam; Bauer, Greg; Bates, Brian; Williamson, Cathleen

    2017-04-01

    Surface topography is among the most fundamental data sets for geosciences, essential for disciplines ranging from glaciology to geodynamics. Two new projects are using sub-meter, commercial imagery licensed by the National Geospatial-Intelligence Agency and open source photogrammetry software to produce a time-tagged 2m posting elevation model of the Arctic and an 8m posting reference elevation model for the Antarctic. When complete, this publicly available data will be at higher resolution than any elevation models that cover the entirety of the Western United States. These two polar projects are made possible due to three equally important factors: 1) open-source photogrammetry software, 2) petascale computing, and 3) sub-meter imagery licensed to the United States Government. Our talk will detail the technical challenges of using automated photogrammetry software; the rapid workflow evolution to allow DEM production; the task of deploying the workflow on one of the world's largest supercomputers; the trials of moving massive amounts of data; and the management strategies the team needed in order to meet deadlines. Finally, we will discuss the implications of this type of collaboration for future multi-team use of leadership-class systems such as Blue Waters, and for further elevation mapping.

  13. Open Source Cloud-Based Technologies for Bim

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Valari, E.; Stylianidis, E.

    2018-05-01

    This paper presents a Cloud-based open source system for storing and processing data from a 3D survey approach. More specifically, we provide an online service for viewing, storing and analysing BIM. Cloud technologies were used to develop a web interface as a BIM data centre, which can handle large BIM data using a server. The server can be accessed by many users through various electronic devices anytime and anywhere, so they can view online 3D models using browsers. Nowadays, Cloud computing is engaged progressively in facilitating BIM-based collaboration between the multiple stakeholders and disciplinary groups involved in complicated Architectural, Engineering and Construction (AEC) projects. In addition, the development of Open Source Software (OSS) has been growing rapidly and its use is becoming increasingly widespread. Although BIM and Cloud technologies are extensively known and used, there is a lack of integrated open source Cloud-based platforms able to support all stages of BIM processes. The present research aims to create an open source Cloud-based BIM system that is able to handle geospatial data. In this effort, only open source tools will be used, from the starting point of creating the 3D model with FreeCAD to its online presentation through BIMserver. Python plug-ins will be developed to link the two software packages and will be distributed freely to a large community of professionals. The research work will be completed by benchmarking four Cloud-based BIM systems, Autodesk BIM 360, BIMserver, Graphisoft BIMcloud and Onuma System, which present remarkable results.

  14. A Web-Based Computer-Aided Learning Module for an Anatomy Course Using Open Source Image Mapping Software

    ERIC Educational Resources Information Center

    Carleton, Renee E.

    2012-01-01

    Computer-aided learning (CAL) is used increasingly to teach anatomy in post-secondary programs. Studies show that augmentation of traditional cadaver dissection and model examination by CAL can be associated with positive student learning outcomes. In order to reduce costs associated with the purchase of skeletons and models and to encourage study…

  15. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and further more to…

  16. Computers in Libraries, 2000: Proceedings (15th, Washington, D.C., March 15-17, 2000).

    ERIC Educational Resources Information Center

    Nixon, Carol, Comp.; Burmood, Jennifer, Comp.

    Topics of the Proceedings of the 15th Annual Computers in Libraries Conference (March 15-17, 2000) include: Linux and open source software in an academic library; a Master Trainer Program; what educators need to know about multimedia and copyright; how super searchers find business information online; managing print costs; new technologies in wide…

  17. Three-Dimensional Printing of X-Ray Computed Tomography Datasets with Multiple Materials Using Open-Source Data Processing

    ERIC Educational Resources Information Center

    Sander, Ian M.; McGoldrick, Matthew T.; Helms, My N.; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W. Matthew

    2017-01-01

    Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing…

  18. Linux Makes the Grade: An Open Source Solution That's Time Has Come

    ERIC Educational Resources Information Center

    Houston, Melissa

    2007-01-01

    In 2001, Indiana officials at the Department of Education were taking stock. The schools had an excellent network infrastructure and had installed significant numbers of computers for 1 million public school enrollees. Yet students were spending less than an hour a week on the computer. It was then that state officials knew each student needed a…

  19. Making the Switch to Open Source Software

    ERIC Educational Resources Information Center

    Surran, Michael

    2003-01-01

    During the 2001-2002 school year the author was struck with the reality that their computer lab would not meet the demands of their school for another year. Greater Houlton Christian Academy (www.ghca.com) is a private school in Maine, and thus does not have access to state or federal funding. This meant that financing a new computer lab would be…

  20. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    ERIC Educational Resources Information Center

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
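    The final analysis step implied by the abstract, fitting distance against time-of-flight to obtain the speed of sound as the slope, can be sketched in a few lines of Python; the data points below are placeholders, not measurements from the article.

      # Sketch: speed of sound as the slope of a distance vs. time-of-flight fit.
      import numpy as np

      distance_m = np.array([0.50, 1.00, 1.50, 2.00, 2.50])             # placeholder distances
      time_s = np.array([0.00146, 0.00292, 0.00437, 0.00583, 0.00729])  # placeholder delays

      speed, intercept = np.polyfit(time_s, distance_m, 1)
      print(f"speed of sound ~ {speed:.1f} m/s")                        # about 343 m/s here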

  1. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (palabos).

  2. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  3. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  4. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    NASA Astrophysics Data System (ADS)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. The code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer as the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
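    Since the simulation output is written in standard formats, post-processing can be done with generic tools; the sketch below reads a NetCDF snapshot with the netCDF4 package, but the file name and variable names are hypothetical because OpenSWPC's actual output layout is documented with the code, not in this abstract.

      # Sketch: read a (hypothetical) NetCDF wavefield snapshot for post-processing.
      import numpy as np
      from netCDF4 import Dataset

      with Dataset("snapshot.nc", "r") as ds:
          x = ds.variables["x"][:]        # horizontal coordinate (assumed variable name)
          z = ds.variables["z"][:]        # depth coordinate (assumed variable name)
          vx = ds.variables["vx"][:]      # particle-velocity field (assumed variable name)
          print("grid:", x.size, "x", z.size, "peak |vx|:", float(np.max(np.abs(vx))))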

  5. Open Source Drug Discovery in Practice: A Case Study

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Background Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionately affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. Methodology/Principal Findings A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Conclusions/Significance Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality research at low cost. The critical success factors appear to be clearly defined entry points, transparency and funding to cover core material costs. PMID:23029588

  6. Python Open source Waveform ExtractoR (POWER): an open source, Python package to monitor and post-process numerical relativity simulations

    NASA Astrophysics Data System (ADS)

    Johnson, Daniel; Huerta, E. A.; Haas, Roland

    2018-01-01

    Numerical simulations of Einstein’s field equations provide unique insights into the physics of compact objects moving at relativistic speeds, and which are driven by strong gravitational interactions. Numerical relativity has played a key role to firmly establish gravitational wave astrophysics as a new field of research, and it is now paving the way to establish whether gravitational wave radiation emitted from compact binary mergers is accompanied by electromagnetic and astro-particle counterparts. As numerical relativity continues to blend in with routine gravitational wave data analyses to validate the discovery of gravitational wave events, it is essential to develop open source tools to streamline these studies. Motivated by our own experience as users and developers of the open source, community software, the Einstein Toolkit, we present an open source, Python package that is ideally suited to monitor and post-process the data products of numerical relativity simulations, and compute the gravitational wave strain at future null infinity in high performance environments. We showcase the application of this new package to post-process a large numerical relativity catalog and extract higher-order waveform modes from numerical relativity simulations of eccentric binary black hole mergers and neutron star mergers. This new software fills a critical void in the arsenal of tools provided by the Einstein Toolkit consortium to the numerical relativity community.
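    One common post-processing step of the kind this package automates is converting the Weyl scalar psi4 into strain by fixed-frequency integration; the sketch below shows that step on a toy signal, with the caveat that POWER's actual implementation, conventions and cutoff choices may differ.

      # Sketch: fixed-frequency integration (strain from psi4) on a toy real signal.
      import numpy as np

      dt = 0.1
      t = np.arange(0.0, 1000.0, dt)
      psi4 = np.sin(0.2 * t) * np.exp(-((t - 500.0) / 150.0) ** 2)   # placeholder signal

      f0 = 0.01                                     # low-frequency cutoff (assumed)
      psi4_f = np.fft.rfft(psi4)
      freqs = np.fft.rfftfreq(psi4.size, d=dt)
      omega = 2.0 * np.pi * np.maximum(freqs, f0)   # floor the angular frequency
      h = np.fft.irfft(-psi4_f / omega ** 2, n=psi4.size)   # since psi4 = d^2 h / dt^2
      print("peak strain (toy units):", np.max(np.abs(h)))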

  7. Open source Matrix Product States: Opening ways to simulate entangled many-body quantum systems in one dimension

    NASA Astrophysics Data System (ADS)

    Jaschke, Daniel; Wall, Michael L.; Carr, Lincoln D.

    2018-04-01

    Numerical simulations are a powerful tool to study quantum systems beyond exactly solvable systems lacking an analytic expression. For one-dimensional entangled quantum systems, tensor network methods, amongst them Matrix Product States (MPSs), have attracted interest from different fields of quantum physics ranging from solid state systems to quantum simulators and quantum computing. Our open source MPS code provides the community with a toolset to analyze the statics and dynamics of one-dimensional quantum systems. Here, we present our open source library, Open Source Matrix Product States (OSMPS), of MPS methods implemented in Python and Fortran2003. The library includes tools for ground state calculation and excited states via the variational ansatz. We also support ground states for infinite systems with translational invariance. Dynamics are simulated with different algorithms, including three algorithms with support for long-range interactions. Convenient features include built-in support for fermionic systems and number conservation with rotational U(1) and discrete Z2 symmetries for finite systems, as well as data parallelism with MPI. We explain the principles and techniques used in this library along with examples of how to efficiently use the general interfaces to analyze the Ising and Bose-Hubbard models. This description includes the preparation of simulations as well as dispatching and post-processing of them.

  8. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm of camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the parameters of the camera and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirement of robot binocular stereo vision.
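    The calibration workflow described above follows the standard OpenCV sequence of corner detection, sub-pixel refinement and intrinsic-parameter estimation; the condensed Python sketch below illustrates it with an assumed 8 x 6 inner-corner board (48 corners, matching the count mentioned, though the exact layout is a guess) and placeholder image file names.

      # Sketch: OpenCV camera calibration from checkerboard images (placeholder paths).
      import glob
      import cv2
      import numpy as np

      pattern = (8, 6)                                  # assumed inner-corner layout
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points = [], []
      image_size = None
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
      for fname in glob.glob("calib_*.png"):            # hypothetical image files
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          image_size = gray.shape[::-1]
          found, corners = cv2.findChessboardCorners(gray, pattern, None)
          if found:
              corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
              obj_points.append(objp)
              img_points.append(corners)

      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_points, img_points, image_size, None, None)
      print("reprojection RMS:", rms)
      print("camera matrix:", K)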

  9. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
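    To make the direct collocation idea concrete outside of OpenSim and MATLAB, the toy sketch below transcribes a one-degree-of-freedom double-integrator problem with trapezoidal collocation and solves the resulting nonlinear program with SciPy's SLSQP; it only illustrates how states, controls and defect constraints enter a single optimization, not the musculoskeletal problems or the IPOPT setup used in the paper.

      # Toy trapezoidal direct collocation: move a unit mass from x=0 to x=1 in 1 s,
      # starting and ending at rest, while minimizing the integral of u^2.
      import numpy as np
      from scipy.optimize import minimize

      N = 21                      # collocation nodes
      T = 1.0
      dt = T / (N - 1)

      def unpack(z):
          return z[:N], z[N:2 * N], z[2 * N:]          # positions, velocities, controls

      def objective(z):
          _, _, u = unpack(z)
          return dt * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))   # trapezoidal cost

      def defects(z):
          x, v, u = unpack(z)
          dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])      # x' = v
          dv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])      # v' = u (unit mass)
          bc = np.array([x[0], v[0], x[-1] - 1.0, v[-1]])        # boundary conditions
          return np.concatenate([dx, dv, bc])

      z0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
      res = minimize(objective, z0, method="SLSQP",
                     constraints={"type": "eq", "fun": defects},
                     options={"maxiter": 500})
      x_opt, _, u_opt = unpack(res.x)
      print("converged:", res.success, "final position:", x_opt[-1])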

  10. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  11. Completeness and overlap in open access systems: Search engines, aggregate institutional repositories and physics-related open sources

    PubMed Central

    Wu, Tai-luan; Tseng, Ling-li

    2017-01-01

    This study examines the completeness and overlap of coverage in physics of six open access scholarly communication systems, including two search engines (Google Scholar and Microsoft Academic), two aggregate institutional repositories (OAIster and OpenDOAR), and two physics-related open sources (arXiv.org and Astrophysics Data System). The 2001–2013 Nobel Laureates in Physics served as the sample. Bibliographic records of their publications were retrieved and downloaded from each system, and a computer program was developed to perform the analytical tasks of sorting, comparison, elimination, aggregation and statistical calculations. Quantitative analyses and cross-referencing were performed to determine the completeness and overlap of the system coverage of the six open access systems. The results may enable scholars to select an appropriate open access system as an efficient scholarly communication channel, and academic institutions may build institutional repositories or independently create citation index systems in the future. Suggestions on indicators and tools for academic assessment are presented based on the comprehensiveness assessment of each system. PMID:29267327

  12. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advances in chemoinformatics research, in parallel with the availability of high performance computing platforms, have made the handling of large-scale, multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source based, integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transforming them into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.
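    As an illustration of the kind of conventional lead-discovery filter mentioned above, the sketch below applies a Lipinski-style rule-of-five check with the open source RDKit toolkit; it stands in for such a filtering step in general and is not the authors' in-house PTC filters.

      # Sketch: rule-of-five style filter on a tiny SMILES library using RDKit.
      from rdkit import Chem
      from rdkit.Chem import Descriptors, Lipinski

      def passes_rule_of_five(smiles):
          mol = Chem.MolFromSmiles(smiles)
          if mol is None:
              return False
          return (Descriptors.MolWt(mol) <= 500
                  and Descriptors.MolLogP(mol) <= 5
                  and Lipinski.NumHDonors(mol) <= 5
                  and Lipinski.NumHAcceptors(mol) <= 10)

      library = ["CC(=O)Oc1ccccc1C(=O)O",      # aspirin
                 "CCCCCCCCCCCCCCCCCC(=O)O"]    # stearic acid
      for smi in library:
          print(smi, "->", "pass" if passes_rule_of_five(smi) else "fail")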

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of the code for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source, GNU-licensed software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.

  14. Use of the initial trauma CT scan to aid in diagnosis of open pelvic fractures.

    PubMed

    Scolaro, John A; Wilson, David J; Routt, Milton Lee Chip; Firoozabadi, Reza

    2015-10-01

    Open pelvic disruptions represent high-energy injuries. The prompt identification and management of these injuries decreases their associated morbidity and mortality. Computed tomography (CT) scans are routinely obtained in the initial evaluation of patients with pelvic injuries. The purpose of this study is to identify the incidence and source of air densities noted on computed tomography (CT) scans of the abdominal and pelvic region in patients with pelvic fractures and evaluate the use of initial CT imaging as an adjunctive diagnostic tool to identify open injuries. A retrospective review of a prospectively collected database was performed at a single institution. Seven hundred and twenty-two consecutive patients with a pelvic disruption over a two-year period were included. Review of initial injury CT scans was performed using bone and lung viewing algorithms to identify the presence of extra-luminal air. The primary outcome was the presence, location and source of air identified on pre-operative CT scans. Secondary measurements were identification of air by plain radiograph and correlation between identified air densities on CT and clinically diagnosed open pelvic fractures. Ninety-eight patients were identified as having extra-luminal air densities on CT scans. Eighty-one patients were included in the final analysis following application of inclusion and exclusion criteria. Air was noted by the radiologist in forty-five (55.6%) instances. Six patients (7.4%) were clinically diagnosed with an open pelvic ring disruption; in two patients (2.4%) this diagnosis was delayed. In all patients, the CT was able to track air from its origin. In patients with pelvic disruptions, the injury CT should also be evaluated for the presence and source of extra-luminal air. In some patients, this finding may represent an open pelvic ring disruption. A complete physical exam and CT evaluation should be used to decrease the missed or delayed diagnosis of an open pelvic ring injury. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. MOSES: A Matlab-based open-source stochastic epidemic simulator.

    PubMed

    Varol, Huseyin Atakan

    2016-08-01

    This paper presents an open-source stochastic epidemic simulator. A Discrete Time Markov Chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
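    In the same spirit as the simulator described above, the sketch below runs a discrete-time stochastic compartmental model in Python, reduced to SEIR rather than the full SEQIJR; the population size and per-step transition probabilities are placeholders chosen only to produce a visible outbreak.

      # Sketch: discrete-time stochastic SEIR simulation with binomial transitions.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 10_000
      S, E, I, R = N - 10, 0, 10, 0
      beta, sigma, gamma = 0.3, 0.2, 0.1       # per-step probabilities (placeholders)

      history = []
      for step in range(200):
          p_inf = 1.0 - np.exp(-beta * I / N)  # chance a susceptible is exposed this step
          new_E = rng.binomial(S, p_inf)
          new_I = rng.binomial(E, sigma)
          new_R = rng.binomial(I, gamma)
          S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
          history.append((S, E, I, R))

      print("final state (S, E, I, R):", history[-1])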

  16. Using soft-hard fusion for misinformation detection and pattern of life analysis in OSINT

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Shabarekh, Charlotte

    2017-05-01

    Today's battlefields are shifting to "denied areas", where the use of U.S. Military air and ground assets is limited. To succeed, the U.S. intelligence analysts increasingly rely on available open-source intelligence (OSINT) which is fraught with inconsistencies, biased reporting and fake news. Analysts need automated tools for retrieval of information from OSINT sources, and these solutions must identify and resolve conflicting and deceptive information. In this paper, we present a misinformation detection model (MDM) which converts text to attributed knowledge graphs and runs graph-based analytics to identify misinformation. At the core of our solution is identification of knowledge conflicts in the fused multi-source knowledge graph, and semi-supervised learning to compute locally consistent reliability and credibility scores for the documents and sources, respectively. We present validation of proposed method using an open source dataset constructed from the online investigations of MH17 downing in Eastern Ukraine.

  17. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    PubMed

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and the source code is freely available, and therefore they can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, these free applications have limitations in their use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  18. Client-side Skype forensics: an overview

    NASA Astrophysics Data System (ADS)

    Meißner, Tina; Kröger, Knut; Creutzburg, Reiner

    2013-03-01

    IT security and computer forensics are important components of information technology. In the present study, a client-side Skype forensic analysis is performed. It explains which kinds of user data are stored on a computer and which tools allow the extraction of those data for a forensic investigation. Both approaches are described: a manual analysis and an analysis with (mainly) open source tools.

  19. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.

  20. From evolutionary computation to the evolution of things.

    PubMed

    Eiben, Agoston E; Smith, Jim

    2015-05-28

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.

  1. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  2. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    PubMed Central

    Andrade-Delgado, Laura; Telich-Tarriba, Jose E.; Fuente-del-Campo, Antonio; Altamirano-Arcos, Carlos A.

    2018-01-01

    Summary: Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies. PMID:29464171

  3. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    PubMed

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively ( P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies.
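
    The error metrics reported above (mean absolute and mean relative difference between paired linear measurements on the reference mandible and the printed model) can be reproduced with a few lines of Python; the measurement values below are placeholders for illustration, not data from the study.

        import numpy as np

        # Hypothetical paired linear measurements (mm): reference mandible vs. printed model.
        reference = np.array([45.2, 38.7, 102.5, 61.3, 77.9])
        printed = np.array([45.9, 38.1, 103.8, 60.5, 78.6])

        absolute_diff = np.abs(printed - reference)        # per-measurement error (mm)
        relative_diff = 100.0 * absolute_diff / reference  # per-measurement error (%)

        print(f"mean absolute difference: {absolute_diff.mean():.2f} mm")
        print(f"mean relative difference: {relative_diff.mean():.2f} %")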

  4. The semantic measures library and toolkit: fast computation of semantic similarity and relatedness using biomedical ontologies.

    PubMed

    Harispe, Sébastien; Ranwez, Sylvie; Janaqi, Stefan; Montmain, Jacky

    2014-03-01

    The semantic measures library and toolkit are robust open-source and easy to use software solutions dedicated to semantic measures. They can be used for large-scale computations and analyses of semantic similarities between terms/concepts defined in terminologies and ontologies. The comparison of entities (e.g. genes) annotated by concepts is also supported. A large collection of measures is available. Not limited to a specific application context, the library and the toolkit can be used with various controlled vocabularies and ontology specifications (e.g. Open Biomedical Ontology, Resource Description Framework). The project targets both designers and practitioners of semantic measures providing a JAVA library, as well as a command-line tool that can be used on personal computers or computer clusters. Downloads, documentation, tutorials, evaluation and support are available at http://www.semantic-measures-library.org.
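
    The library itself is used through its Java API or command-line tool; purely to illustrate what an information-content-based measure such as Resnik's similarity computes, here is a small Python sketch over a toy ontology (the hierarchy, annotation counts, and concept names are invented for the example and do not come from the library).

        import math

        # Toy is-a hierarchy: child -> parent (single inheritance for simplicity).
        parents = {"kinase": "enzyme", "phosphatase": "enzyme", "enzyme": "protein",
                   "protein": "molecule", "molecule": None}

        # Hypothetical annotation counts used to estimate concept probabilities.
        counts = {"kinase": 10, "phosphatase": 5, "enzyme": 20, "protein": 40, "molecule": 60}
        total = counts["molecule"]  # root count plays the role of the corpus size here

        def ancestors(c):
            # The concept itself plus all of its ancestors up to the root.
            out = set()
            while c is not None:
                out.add(c)
                c = parents[c]
            return out

        def ic(c):
            # Information content: -log p(c), with p estimated from annotation counts.
            return -math.log(counts[c] / total)

        def resnik(c1, c2):
            # Resnik similarity: IC of the most informative common ancestor.
            common = ancestors(c1) & ancestors(c2)
            return max(ic(c) for c in common)

        print(resnik("kinase", "phosphatase"))  # IC of "enzyme" in this toy ontology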

  5. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive.

  6. QuantumOptics.jl: A Julia framework for simulating open quantum systems

    NASA Astrophysics Data System (ADS)

    Krämer, Sebastian; Plankensteiner, David; Ostermann, Laurin; Ritsch, Helmut

    2018-06-01

    We present an open source computational framework geared towards the efficient numerical investigation of open quantum systems written in the Julia programming language. Built exclusively in Julia and based on standard quantum optics notation, the toolbox offers speed comparable to low-level statically typed languages, without compromising on the accessibility and code readability found in dynamic languages. After introducing the framework, we highlight its features and showcase implementations of generic quantum models. Finally, we compare its usability and performance to two well-established and widely used numerical quantum libraries.

  7. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.

  8. A Stigmergy Approach for Open Source Software Developer Community Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Beaver, Justin M; Potok, Thomas E

    2009-01-01

    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we present a stigmergy approach for building an agent-based open source software (OSS) developer community collaboration simulation. We used groups of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global state of the whole OSS project. In our simulation, forum posts and project code serve as the digital pheromone, and a modified Pierre-Paul Grassé pheromone model is used for computing the selection probability of developer agent behaviors.
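
    A central ingredient of such a simulation is the rule by which an agent selects a project with probability proportional to the digital pheromone deposited on it. The Python sketch below shows one common form of that rule (power-law weighting with evaporation); the pheromone values and exponent are illustrative and are not the paper's Grassé-derived model.

        import random

        def choice_probabilities(pheromone, alpha=1.5):
            # Probability of selecting each project, proportional to pheromone**alpha.
            weights = [tau ** alpha for tau in pheromone]
            total = sum(weights)
            return [w / total for w in weights]

        def pick_project(pheromone, alpha=1.5):
            probs = choice_probabilities(pheromone, alpha)
            return random.choices(range(len(pheromone)), weights=probs, k=1)[0]

        # Hypothetical pheromone levels (e.g., recent forum posts plus commits per project).
        pheromone = [12.0, 3.0, 25.0, 7.0]
        selected = pick_project(pheromone)
        pheromone[selected] += 1.0                     # the contribution reinforces the trail
        pheromone = [0.95 * tau for tau in pheromone]  # evaporation limits old trails
        print(selected, pheromone)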

  9. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating extensibility and interoperability of the software through decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  10. R and Spatial Data

    EPA Science Inventory

    R is an open source language and environment for statistical computing and graphics that can also be used for both spatial analysis (i.e. geoprocessing and mapping of different types of spatial data) and spatial data analysis (i.e. the application of statistical descriptions and ...

  11. Science and Software

    NASA Astrophysics Data System (ADS)

    Zelt, C. A.

    2017-12-01

    Earth science attempts to understand how the earth works. This research often depends on software for modeling, processing, inverting or imaging. Freely sharing open-source software is essential to prevent reinventing the wheel and allows software to be improved and applied in ways the original author may never have envisioned. For young scientists, releasing software can increase their name ID when applying for jobs and funding, and create opportunities for collaborations when scientists who collect data want the software's creator to be involved in their project. However, we frequently hear scientists say that software is a tool, not science. Creating software that implements a new or better way of earth modeling or geophysical processing, inverting or imaging should be viewed as earth science. Creating software for things like data visualization, format conversion, storage, or transmission, or programming to enhance computational performance, may be viewed as computer science. The former, ideally with an application to real data, can be published in earth science journals, the latter possibly in computer science journals. Citations in either case should accurately reflect the impact of the software on the community. Funding agencies need to support more software development and open-source releasing, and the community should give more high-profile awards for developing impactful open-source software. Funding support and community recognition for software development can have far-reaching benefits when the software is used in foreseen and unforeseen ways, potentially for years after the original investment in the software development. For funding, an open-source release that is well documented should be required, with example input and output files. Appropriate funding will provide the incentive and time to release user-friendly software, and minimize the need for others to duplicate the effort. All funded software should be available through a single web site, ideally maintained by someone in a funded position. Perhaps the biggest challenge is the reality that researchers who use software, as opposed to those who develop it, are more attractive university hires because they are more likely to be "big picture" scientists who publish in the highest profile journals, although sometimes the two go together.

  12. MATLAB-based algorithm to estimate depths of isolated thin dike-like sources using higher-order horizontal derivatives of magnetic anomalies.

    PubMed

    Ekinci, Yunus Levent

    2016-01-01

    This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
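
    The published code is written in MATLAB; as a hedged illustration of the first processing step only (forming successive horizontal derivatives of an observed anomaly profile numerically), the Python sketch below differentiates a synthetic anomaly. The anomaly shape and station spacing are invented, and the depth-estimation equations themselves are not reproduced here.

        import numpy as np

        # Synthetic magnetic anomaly sampled along a profile (nT), station spacing dx (m).
        dx = 10.0
        x = np.arange(0.0, 1000.0, dx)
        anomaly = 500.0 * np.exp(-((x - 500.0) ** 2) / (2 * 80.0 ** 2))  # placeholder anomaly

        # Successive horizontal derivatives via central differences.
        d1 = np.gradient(anomaly, dx)
        d2 = np.gradient(d1, dx)
        d3 = np.gradient(d2, dx)
        d4 = np.gradient(d3, dx)

        # The depth-estimation step in the paper solves nonlinear equations built from
        # these derivative datasets; here we only report their peak amplitudes.
        for order, d in enumerate((d2, d3, d4), start=2):
            print(f"order {order}: max |derivative| = {np.max(np.abs(d)):.3e}")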

  13. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third-party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
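
    SCMT itself couples FEBio and Matlab; the general pattern it automates (fit a fast regressor to input-output samples drawn from an expensive elastic contact model) can be sketched in Python as below, with a cheap analytic function standing in for the finite element contact analysis and all sample values invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical training samples: joint pose (flexion angle in degrees, translation
        # in mm) -> contact force (N). A cheap analytic function stands in for the
        # expensive finite element contact analysis.
        poses = rng.uniform([-30.0, -2.0], [30.0, 2.0], size=(500, 2))

        def expensive_contact_model(pose):
            angle, translation = pose
            return 100.0 + 2.5 * angle ** 2 + 40.0 * abs(translation)

        forces = np.array([expensive_contact_model(p) for p in poses])
        X_train, X_test, y_train, y_test = train_test_split(poses, forces, random_state=0)

        # Surrogate: a small feed-forward neural network fitted to the sampled data.
        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(32, 32),
                                               max_iter=5000, random_state=0))
        surrogate.fit(X_train, y_train)
        print("surrogate R^2 on held-out samples:", round(surrogate.score(X_test, y_test), 3))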

  14. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.

  15. Swan: A tool for porting CUDA programs to OpenCL

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; De Fabritiis, G.

    2011-04-01

    The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware-independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan", for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance.

    Program summary. Program title: Swan. Catalogue identifier: AEIH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public License version 2. No. of lines in distributed program, including test data, etc.: 17 736. No. of bytes in distributed program, including test data, etc.: 131 177. Distribution format: tar.gz. Programming language: C. Computer: PC. Operating system: Linux. RAM: 256 Mbytes. Classification: 6.5. External routines: NVIDIA CUDA, OpenCL. Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programmed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore x86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion. Solution method: Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time. Restrictions: No support for CUDA C++ features. Running time: Nominal.

  16. An open-source textbook for teaching climate-related risk analysis using the R computing environment

    NASA Astrophysics Data System (ADS)

    Applegate, P. J.; Keller, K.

    2015-12-01

    Greenhouse gas emissions lead to increased surface air temperatures and sea level rise. In turn, sea level rise increases the risks of flooding for people living near the world's coastlines. Our own research on assessing sea level rise-related risks emphasizes both Earth science and statistics. At the same time, the free, open-source computing environment R is growing in popularity among statisticians and scientists due to its flexibility and graphics capabilities, as well as its large library of existing functions. We have developed a set of laboratory exercises that introduce students to the Earth science and statistical concepts needed for assessing the risks presented by climate change, particularly sea-level rise. These exercises will be published as a free, open-source textbook on the Web. Each exercise begins with a description of the Earth science and/or statistical concepts that the exercise teaches, with references to key journal articles where appropriate. Next, students are asked to examine in detail a piece of existing R code, and the exercise text provides a clear explanation of how the code works. Finally, students are asked to modify the existing code to produce a well-defined outcome. We discuss our experiences in developing the exercises over two separate semesters at Penn State, plus using R Markdown to interweave explanatory text with sample code and figures in the textbook.

  17. compomics-utilities: an open-source Java library for computational proteomics.

    PubMed

    Barsnes, Harald; Vaudel, Marc; Colaert, Niklaas; Helsens, Kenny; Sickmann, Albert; Berven, Frode S; Martens, Lennart

    2011-03-08

    The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. And even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software, ensuring library stability and continued support and development. As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows the developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development, and better tools for proteomics.

  18. Investigating Advances in the Acquisition of Systems Based on Open Architecture and Open Source Software

    DTIC Science & Technology

    2011-08-01

    dominates the global mobile application market and mobile computing software ecosystems. But overall, OA systems are not necessarily excluded from...License 3.0 (OSL) Corel Transactional License (CTL) The licenses were chosen to represent a variety of kinds of licenses, and include one...proprietary (CTL), three academic (Apache, BSD, MIT), and six reciprocal licenses (CPL, EPL, GPL, LGPL, MPL, OSL) that take varying approaches in

  19. An Analysis of the Computer Security Ramifications of Weakened Asymmetric Cryptographic Algorithms

    DTIC Science & Technology

    2012-06-01

    OpenVPN (Yonan). TLS (and by extension SSL) obviously rely on encryption to provide the confidentiality, integrity and authentication services it...Secure Shell (SSH) Transport Layer Protocol.” IETF, Jan. 2006. <tools.ietf.org/html/rfc4253> Yonan, James, and Mattock. "OpenVPN." SourceForge...11 May 2012. <http://sourceforge.net/projects/openvpn/>

  20. Hybrid cloud: bridging of private and public cloud computing

    NASA Astrophysics Data System (ADS)

    Aryotejo, Guruh; Kristiyanto, Daniel Y.; Mufadhol

    2018-05-01

    Cloud Computing is quickly emerging as a promising paradigm in recent years, especially for the business sector. In addition, through cloud service providers, cloud computing is widely used by Information Technology (IT) based startup companies to grow their business. However, most businesses' awareness of data security issues is low, since some Cloud Service Providers (CSPs) could decrypt their data. The Hybrid Cloud Deployment Model (HCDM) is open source in character, which makes it one of the more secure cloud computing models, so HCDM may solve these data security issues. The objective of this study is to design, deploy and evaluate an HCDM as Infrastructure as a Service (IaaS). In the implementation process, the Metal as a Service (MAAS) engine was used as a base to build an actual server and node, followed by installation of the vsftpd application, which serves as the FTP server. For comparison with HCDM, a public cloud was adopted through a public cloud interface. As a result, the design and deployment of HCDM were conducted successfully; in addition to good security, HCDM was able to transfer data significantly faster than the public cloud. To the best of our knowledge, the Hybrid Cloud Deployment Model is one of the more secure cloud computing models due to its open source character. Furthermore, this study will serve as a base for future studies of the Hybrid Cloud Deployment Model, which may be relevant for solving major security issues of IT-based startup companies, especially in Indonesia.

  1. Implementation of highly parallel and large scale GW calculations within the OpenAtom software

    NASA Astrophysics Data System (ADS)

    Ismail-Beigi, Sohrab

    The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  2. OpenPET Hardware, Firmware, Software, and Board Design Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Nimeh, Faisal; Choong, Woon-Seng; Moses, William W.

    OpenPET is an open source, flexible, high-performance, and modular data acquisition system for a variety of applications. The OpenPET electronics are capable of reading analog voltage or current signals from a wide variety of sensors. The electronics boards make extensive use of field programmable gate arrays (FPGAs) to provide flexibility and scalability. Firmware and software for the FPGAs and computer are used to control and acquire data from the system. The command and control flow is similar to the data flow; however, the commands are initiated from the computer and descend through a tree topology (i.e., from top to bottom). Each node in the tree discovers its parent and children, and all addresses are configured accordingly. A user (or a script) initiates a command from the computer. This command will be translated and encoded to the corresponding child (e.g., SB, MB, DB, etc.). Consecutively, each node will pass the command to its corresponding child(ren) by looking at the destination address. Finally, once the command reaches its desired destination(s) the corresponding node(s) execute(s) the command and send(s) a reply, if required. All the firmware, software, and the electronics board design files are distributed through the OpenPET website (http://openpet.lbl.gov).
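
    The command flow described above (commands descend from the computer through a tree of boards until the addressed node executes them and replies) can be illustrated with a small Python sketch; the address scheme and node layout are hypothetical and do not reflect the actual OpenPET firmware.

        class Node:
            def __init__(self, address, children=None):
                self.address = address          # e.g. (support_board, daughter_board)
                self.children = children or []

            def handle(self, destination, command):
                # Execute locally if this node is the destination, otherwise pass the
                # command down to whichever child lies on the path to the destination.
                if destination == self.address:
                    return f"node {self.address} executed {command!r}"
                for child in self.children:
                    if destination[:len(child.address)] == child.address:
                        return child.handle(destination, command)
                return f"node {self.address}: unreachable destination {destination}"

        # Hypothetical two-level tree: one support board with two daughter boards.
        tree = Node((0,), children=[Node((0, 0)), Node((0, 1))])
        print(tree.handle((0, 1), "read_adc"))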

  3. Open Source software and social networks: disruptive alternatives for medical imaging.

    PubMed

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris

    2011-05-01

    In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture, providing new perspectives and innovative approaches to a traditionally conservative medical community. Disruptive technologies such as the world-wide-web, wireless networking, Open Source software and the recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of computer and technology infrastructure applicable to medical imaging applications. This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost effective alternative to traditional commercial software developments, and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. Developed by developers who are themselves part of the user community, these tools are usually better adapted to users' needs and, being developed and tested by a large number of contributing users, are more robust than traditional software programs. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed social behavior and habits, adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks that allow groups of people to easily communicate and exchange information are a new model that is particularly suitable for specific groups of healthcare professionals and for physicians. It has also changed the expectations of how patients wish to communicate with their physicians. Emerging disruptive technologies and innovative paradigms such as Open Source software are leading the way to a new generation of information systems that will slowly change the way physicians, healthcare providers and patients interact and communicate in the future. The impact of these new technologies is particularly effective in image communication, PACS and teleradiology. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Scaling Agile Infrastructure to People

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  5. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, are used to accelerate convergence. RMG is a cross platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  6. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open source co-simulation framework “Framework for Network Co-Simulation” (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
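
    Without reproducing the FNCS API, the decoupled exchange it coordinates can be illustrated by a minimal lock-step loop in which a transmission solver and a distribution solver trade boundary quantities (bus voltage and feeder load) each time step; both solvers below are placeholder functions, not power system models.

        def transmission_step(load_at_bus, dt):
            # Placeholder for the transmission dynamics simulator: returns the bus voltage
            # seen by the distribution feeder given the aggregated feeder load (per unit).
            return 1.05 - 0.02 * load_at_bus

        def distribution_step(bus_voltage, dt):
            # Placeholder for the distribution dynamics simulator: returns the aggregated
            # feeder load given the voltage at the point of common coupling (per unit).
            return 0.8 / max(bus_voltage, 0.5)

        def cosimulate(t_end=1.0, dt=0.01):
            voltage, load = 1.0, 0.8      # initial boundary conditions (per unit)
            t = 0.0
            while t < t_end:
                # In FNCS this exchange and time synchronization is handled by the broker;
                # here the two placeholder solvers simply alternate within one process.
                voltage = transmission_step(load, dt)
                load = distribution_step(voltage, dt)
                t += dt
            return voltage, load

        print(cosimulate())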

  7. Information-Theoretical Analysis of EEG Microstate Sequences in Python.

    PubMed

    von Wegner, Frederic; Laufs, Helmut

    2018-01-01

    We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A-D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
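
    As a sketch of one of the quantities mentioned, the time-lagged mutual information of a symbolic label sequence can be computed directly from joint symbol frequencies; the Python code below is a generic illustration on a random four-letter sequence and is not the package's own implementation.

        import numpy as np

        def lagged_mutual_information(labels, lag):
            """Mutual information I(X_t; X_{t+lag}) of a symbolic sequence, in bits."""
            x, y = labels[:-lag], labels[lag:]
            symbols = np.unique(labels)
            mi = 0.0
            for a in symbols:
                pa = np.mean(x == a)
                for b in symbols:
                    pb = np.mean(y == b)
                    pab = np.mean((x == a) & (y == b))
                    if pab > 0:
                        mi += pab * np.log2(pab / (pa * pb))
            return mi

        # Hypothetical microstate sequence with four labels (A-D encoded as 0-3).
        rng = np.random.default_rng(1)
        seq = rng.integers(0, 4, size=5000)
        print([round(lagged_mutual_information(seq, lag), 4) for lag in (1, 2, 5, 10)])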

  8. The simulated air flow pattern around a moving animal transport vehicle as the basis for a prospective biosecurity risk assessment.

    PubMed

    Seedorf, Jens; Schmidt, Ralf-Gunther

    2017-08-01

    Research that investigates bioaerosol emissions from animal transport vehicles (ATVs) and their importance in the spread of harmful airborne agents while the ATVs travel on roads is limited. To investigate the dynamical behaviour of theoretically released particles from a moving ATV, the open-source computational fluid dynamics (CFD) software OpenFOAM was used to calculate the external and internal air flow fields with passive and forced ventilated openings of a common ATV moving at a speed of 80 km/h. In addition to a computed flow rate of approximately 40,000 m³/h crossing the interior of the ATV, the visualization of the trajectories has demonstrated distinct patterns of the spatial distribution of potentially released bioaerosols in the vicinity of the ATV. Although the front openings show the highest air flow to the outside, the recirculations of air masses between the interior of the ATV and the atmosphere also occur, which complicate the emission and the dispersion characterizations. To specify the future emission rates of ATVs, a database of bioaerosol concentrations within the ATV is necessary in conjunction with high-performance computing resources to simulate the potential dispersion of bioaerosols in the environment.

  9. A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.; Chettri, S. S.

    2011-12-01

    We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.

  10. Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework

    NASA Astrophysics Data System (ADS)

    Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.

    2015-12-01

    Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to make use of a modular approach, minimizing the effort developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), which is a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour, and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link to enable full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.
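
    Not Zephyr's implementation, but a small sketch of the pattern described (independent frequency-domain forward solves distributed across workers) using a 1D viscoacoustic Helmholtz problem discretized with finite differences; the grid, velocity, quality factor, and frequencies are all invented for illustration.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        N, dx = 200, 5.0                    # grid points and spacing (m)
        c, Q = 1500.0, 50.0                 # velocity (m/s) and quality factor
        source = np.zeros(N); source[N // 2] = 1.0

        def helmholtz_solve(freq):
            """Solve (d2/dx2 + k^2) u = -s on a 1D grid with Dirichlet ends."""
            omega = 2.0 * np.pi * freq
            k = (omega / c) * (1.0 + 1j / (2.0 * Q))   # complex wavenumber (attenuation)
            A = np.zeros((N, N), dtype=complex)
            idx = np.arange(N)
            A[idx, idx] = -2.0 / dx**2 + k**2
            A[idx[:-1], idx[:-1] + 1] = 1.0 / dx**2
            A[idx[1:], idx[1:] - 1] = 1.0 / dx**2
            u = np.linalg.solve(A, -source.astype(complex))
            return freq, np.max(np.abs(u))

        if __name__ == "__main__":
            freqs = [2.0, 4.0, 8.0, 16.0]
            # Each frequency is independent, so the solves can run on separate workers.
            with ProcessPoolExecutor() as pool:
                for freq, peak in pool.map(helmholtz_solve, freqs):
                    print(f"{freq:5.1f} Hz: peak |u| = {peak:.3e}")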

  11. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) Make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs. b) Provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets. c) Allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  12. Histomorphometric Parameters of the Growth Plate and Trabecular Bone in Wild-Type and Trefoil Factor Family 3 (Tff3)-Deficient Mice Analyzed by Free and Open-Source Image Processing Software.

    PubMed

    Bijelić, Nikola; Belovari, Tatjana; Stolnik, Dunja; Lovrić, Ivana; Baus Lončar, Mirela

    2017-08-01

    Trefoil factor family 3 (Tff3) peptide is present during intrauterine endochondral ossification in mice, and its deficiency affects cancellous bone quality in secondary ossification centers of mouse tibiae. The aim of this study was to quantitatively analyze parameters describing the growth plate and primary ossification centers in tibiae of 1-month-old wild-type and Tff3 knock-out mice (n=5 per genotype) by using free and open-source software. Digital photographs of the growth plates and trabecular bone were processed by open-source computer programs GIMP and FIJI. Histomorphometric parameters were calculated using measurements made with FIJI. Tff3 knock-out mice had significantly smaller trabecular number and significantly larger trabecular separation. Trabecular bone volume, trabecular bone surface, and trabecular thickness showed no significant difference between the two groups. Although such histomorphological differences were found in the cancellous bone structure, no significant differences were found in the epiphyseal plate histomorphology. Tff3 peptide probably has an effect on the formation and quality of the cancellous bone in the primary ossification centers, but not through disrupting the epiphyseal plate morphology. This work emphasizes the benefits of using free and open-source programs for morphological studies in life sciences.

  13. MzJava: An open source library for mass spectrometry data processing.

    PubMed

    Horlacher, Oliver; Nikitin, Frederic; Alocci, Davide; Mariethoz, Julien; Müller, Markus; Lisacek, Frederique

    2015-11-03

    Mass spectrometry (MS) is a widely used and evolving technique for the high-throughput identification of molecules in biological samples. The need for sharing and reuse of code among bioinformaticians working with MS data prompted the design and implementation of MzJava, an open-source Java Application Programming Interface (API) for MS related data processing. MzJava provides data structures and algorithms for representing and processing mass spectra and their associated biological molecules, such as metabolites, glycans and peptides. MzJava includes functionality to perform mass calculation, peak processing (e.g. centroiding, filtering, transforming), spectrum alignment and clustering, protein digestion, fragmentation of peptides and glycans as well as scoring functions for spectrum-spectrum and peptide/glycan-spectrum matches. For data import and export MzJava implements readers and writers for commonly used data formats. For many classes, support for the Hadoop MapReduce (hadoop.apache.org) and Apache Spark (spark.apache.org) frameworks for cluster computing was implemented. The library has been developed applying best practices of software engineering. To ensure that MzJava contains code that is correct and easy to use, the library's API was carefully designed and thoroughly tested. MzJava is an open-source project distributed under the AGPL v3.0 licence. MzJava requires Java 1.7 or higher. Binaries, source code and documentation can be downloaded from http://mzjava.expasy.org and https://bitbucket.org/sib-pig/mzjava. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
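
    MzJava is a Java library; to give a flavour of one of the peak-processing steps it lists, here is a generic Python sketch of centroiding a profile spectrum (contiguous points above a threshold are grouped into peaks reported at their intensity-weighted m/z); it is not MzJava's algorithm, and the spectrum is synthetic.

        import numpy as np

        def centroid(mz, intensity, threshold=10.0):
            """Collapse a profile spectrum into centroided peaks.

            Contiguous runs of points above `threshold` become one peak whose m/z is the
            intensity-weighted mean of the run and whose intensity is the run's maximum.
            """
            peaks, i, n = [], 0, len(mz)
            while i < n:
                if intensity[i] > threshold:
                    j = i
                    while j < n and intensity[j] > threshold:
                        j += 1
                    run_mz, run_int = mz[i:j], intensity[i:j]
                    peaks.append((float(np.average(run_mz, weights=run_int)),
                                  float(run_int.max())))
                    i = j
                else:
                    i += 1
            return peaks

        # Synthetic profile spectrum with two Gaussian peaks (not real MS data).
        mz = np.linspace(400.0, 402.0, 400)
        intensity = (1000.0 * np.exp(-(mz - 400.5) ** 2 / (2 * 0.02 ** 2))
                     + 300.0 * np.exp(-(mz - 401.4) ** 2 / (2 * 0.02 ** 2)))
        print(centroid(mz, intensity))  # expect centroids near m/z 400.5 and 401.4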

  14. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.
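
    DICe's algorithms are considerably more sophisticated, but the core idea of subset-based digital image correlation (finding the displacement that best aligns a subset between a reference and a deformed image) can be sketched with an integer-pixel normalized cross-correlation search in Python; the speckle images and imposed shift below are synthetic.

        import numpy as np

        def subset_displacement(ref, deformed, center, half=15, search=10):
            """Integer-pixel displacement of a square subset via normalized cross-correlation."""
            r, c = center
            sub = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            sub = (sub - sub.mean()) / sub.std()
            best, best_score = (0, 0), -np.inf
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    win = deformed[r + dr - half:r + dr + half + 1,
                                   c + dc - half:c + dc + half + 1].astype(float)
                    win = (win - win.mean()) / win.std()
                    score = np.mean(sub * win)      # zero-normalized cross-correlation
                    if score > best_score:
                        best, best_score = (dr, dc), score
            return best

        # Synthetic speckle image and a copy shifted by (3, -2) pixels.
        rng = np.random.default_rng(0)
        ref = rng.random((200, 200))
        deformed = np.roll(ref, shift=(3, -2), axis=(0, 1))
        print(subset_displacement(ref, deformed, center=(100, 100)))  # expect (3, -2)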

  15. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.

  16. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  17. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  18. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  19. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov Websites

    programming and optimization. Gaussian Chemistry Program for calculating molecular electronic structure and Materials Science Open-source classical molecular dynamics program designed for massively parallel systems framework Q-Chem Chemistry ab initio quantum chemistry package for predicting molecular structures

  20. Internet Research, Uncensored

    ERIC Educational Resources Information Center

    Kean, Sam

    2007-01-01

    In this article, the author discusses a computer program called Psiphon which bypasses government filters undetected. The University of Toronto's Citizen Lab, a research center for digital media and politics, designed Psiphon for technology-savvy activists. Some technology-savvy activists use other open-source software, like Tor (which relies on…

  1. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    EPA Science Inventory

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  2. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS

    PubMed Central

    Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2016-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971
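
    To make the Common Workflow Language idea concrete, the toy sketch below writes out a minimal CWL CommandLineTool description from Python; an executor such as the Rabix Executor or cwltool interprets files of this form. The echo tool and file name are illustrative only.

        # Toy sketch: a minimal CWL v1.0 CommandLineTool description generated from
        # Python (requires PyYAML). The tool simply echoes its input string; a CWL
        # executor would interpret a file like this one.
        import yaml

        echo_tool = {
            "cwlVersion": "v1.0",
            "class": "CommandLineTool",
            "baseCommand": "echo",
            "inputs": {
                "message": {"type": "string", "inputBinding": {"position": 1}},
            },
            "outputs": [],
        }

        with open("echo-tool.cwl", "w") as fh:
            yaml.safe_dump(echo_tool, fh, default_flow_style=False)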

  3. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.

    PubMed

    Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2017-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.

  4. A Platform for Scalable Satellite and Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  5. AstroBlend: An astrophysical visualization package for Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2016-04-01

    The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three dimensional modeling software, Blender. While Blender has been a popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset, yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, workflow, and various example visualizations. A website - www.astroblend.com - has been developed, which includes tutorials and a gallery of example images and movies, along with links to downloadable data, three dimensional artistic models, and various other resources.

  6. Navigating protected genomics data with UCSC Genome Browser in a Box.

    PubMed

    Haeussler, Maximilian; Raney, Brian J; Hinrichs, Angie S; Clawson, Hiram; Zweig, Ann S; Karolchik, Donna; Casper, Jonathan; Speir, Matthew L; Haussler, David; Kent, W James

    2015-03-01

    Genome Browser in a Box (GBiB) is a small virtual machine version of the popular University of California Santa Cruz (UCSC) Genome Browser that can be run on a researcher's own computer. Once GBiB is installed, a standard web browser is used to access the virtual server and add personal data files from the local hard disk. Annotation data are loaded on demand through the Internet from UCSC or can be downloaded to the local computer for faster access. Software downloads and installation instructions are freely available for non-commercial use at https://genome-store.ucsc.edu/. GBiB requires the installation of open-source software VirtualBox, available for all major operating systems, and the UCSC Genome Browser, which is open source and free for non-commercial use. Commercial use of GBiB and the Genome Browser requires a license (http://genome.ucsc.edu/license/). © The Author 2014. Published by Oxford University Press.

  7. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method utilized by the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language Perl that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.
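
    The response-dependent branching described above can be illustrated in a few lines; the sketch below is in Python rather than dynQuest's Perl, and the questionnaire content is invented.

        # Toy illustration of response-dependent follow-up questions (not dynQuest
        # code): the identifier of the next question is looked up from the answer.
        questions = {
            "q1": {"text": "Do you exercise regularly? (yes/no)",
                   "next": {"yes": "q2", "no": "q3"}},
            "q2": {"text": "How many sessions per week?", "next": {}},
            "q3": {"text": "What is the main barrier?", "next": {}},
        }

        def run(start="q1"):
            qid, answers = start, {}
            while qid:
                answer = input(questions[qid]["text"] + " ").strip().lower()
                answers[qid] = answer
                qid = questions[qid]["next"].get(answer)  # branch on the response
            return answers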

  8. What makes computational open source software libraries successful?

    NASA Astrophysics Data System (ADS)

    Bangerth, Wolfgang; Heister, Timo

    2013-01-01

    Software is the backbone of scientific computing. Yet, while we regularly publish detailed accounts about the results of scientific software, and while there is a general sense of which numerical methods work well, our community is largely unaware of best practices in writing the large-scale, open source scientific software upon which our discipline rests. This is particularly apparent in the commonly held view that writing successful software packages is largely the result of simply ‘being a good programmer’ when in fact there are many other factors involved, for example the social skill of community building. In this paper, we consider what we have found to be the necessary ingredients for successful scientific software projects and, in particular, for software libraries upon which the vast majority of scientific codes are built today. In particular, we discuss the roles of code, documentation, communities, project management and licenses. We also briefly comment on the impact on academic careers of engaging in software projects.

  9. An Experimental Comparison Between Flexible and Rigid Airfoils at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Uzodinma, Jaylon; Macphee, David

    2017-11-01

    This study uses experimental and computational research methods to compare the aerodynamic performance of rigid and flexible airfoils at a low Reynolds number throughout varying angles of attack. This research can be used to improve the design of small wind turbines, micro-aerial vehicles, and any other devices that operate at low Reynolds numbers. Experimental testing was conducted in the University of Alabama's low-speed wind tunnel, and computational testing was conducted using the open-source CFD code OpenFOAM. For experimental testing, polyurethane-based (rigid) airfoils and silicone-based (flexible) airfoils were constructed using acrylic molds for NACA 0012 and NACA 2412 airfoil profiles. Computer models of the previously-specified airfoils were also created for a computational analysis. Both experimental and computational data were analyzed to examine the critical angles of attack, the lift and drag coefficients, and the occurrence of laminar boundary separation for each airfoil. Moreover, the computational simulations were used to examine the resulting flow fields, in order to provide possible explanations for the aerodynamic performances of each airfoil type. EEC 1659710.

  10. Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis

    PubMed Central

    Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2016-01-01

    Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's: (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically-simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-ch simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257
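
    A heavily simplified sketch of the idea of per-sample (online) whitening followed by an ICA weight update is shown below; it uses an exponentially weighted covariance estimate rather than the RLS whitening of ORICA, and the learning rates and tanh nonlinearity are assumptions, not the published algorithm.

        # Simplified sketch, not the ORICA implementation: per-sample whitening from
        # an exponentially weighted covariance, followed by a natural-gradient ICA
        # update with a tanh nonlinearity. Learning rates are arbitrary.
        import numpy as np

        def online_ica(samples, lam=0.005, mu=0.001):
            n = samples.shape[1]          # number of channels
            C = np.eye(n)                 # running covariance estimate
            W = np.eye(n)                 # unmixing matrix (whitened space)
            for x in samples:             # one multichannel sample at a time
                C = (1 - lam) * C + lam * np.outer(x, x)
                d, E = np.linalg.eigh(C)  # symmetric inverse square root for whitening
                sphere = E @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ E.T
                y = W @ (sphere @ x)
                W += mu * (np.eye(n) - np.outer(np.tanh(y), y)) @ W
            return W @ sphere             # overall unmixing matrix

        # components = online_ica(eeg_samples) @ eeg_samples.T  (rows = sources)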

  11. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  12. Macedonian journal of chemistry and chemical engineering: open journal systems--editor's perspective.

    PubMed

    Zdravkovski, Zoran

    2014-01-01

    The development and availability of personal computers and software, as well as printing techniques, in the last twenty years have made a profound change in the publication of scientific journals. Additionally, the Internet in the last decade has revolutionized the publication process to the point of changing the basic paradigm of printed journals. The Macedonian Journal of Chemistry and Chemical Engineering in its 40-year history has adopted and adapted to all these transformations. In order to keep up with the inevitable changes, as editor-in-chief I felt it was my responsibility to introduce electronic editorial management of the journal. The choice was between commercial and open source platforms, and because of the limited funding of the journal we chose the latter. We decided on Open Journal Systems, which provided online submission and management of all content, had flexible configuration--requirements, sections, review process, etc., had options for comprehensive indexing, offered various reading tools, had email notification and commenting ability for readers, had an option for thesis abstracts and was installed locally. However, since support is limited, it requires moderate computer knowledge and skills, and some effort, to set up. Overall, it is an excellent editorial platform and a convenient solution for journals with a low budget or journals that do not want to spend their resources on commercial platforms or simply support the idea of open source software.

  13. CosmoQuest: A Cyber-Infrastructure for Crowdsourcing Planetary Surface Mapping and More

    NASA Astrophysics Data System (ADS)

    Gay, P.; Lehan, C.; Moore, J.; Bracey, G.; Gugliucci, N.

    2014-04-01

    The design and implementation of programs to crowdsource science presents a unique set of challenges to system architects, programmers, and designers. The CosmoQuest Citizen Science Builder (CSB) is an open source platform designed to take advantage of crowd computing and open source platforms to solve crowdsourcing problems in Planetary Science. CSB combines a clean user interface with a powerful back end to allow the quick design and deployment of citizen science sites that meet the needs of both the random Joe Public, and the detail driven Albert Professional. In this talk, the software will be overviewed, and the results of usability testing and accuracy testing with both citizen and professional scientists will be discussed.

  14. The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    NASA Astrophysics Data System (ADS)

    Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David

    2017-05-01

    We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology, and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from JPL's Planetary Data System's ontological infrastructure. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built such that we are able to reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.

  15. Spatial DBMS Architecture for a Free and Open Source BIM

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Valari, E.; Karachaliou, E.; Stylianidis, E.

    2017-08-01

    Recent research in the field of Building Information Modelling (BIM) technology revealed that, except for a few accessible and free BIM viewers, there is a lack of Free & Open Source Software (FOSS) covering the complete BIM process. With this in mind, and considering BIM as the technological advancement of Computer-Aided Design (CAD) systems, the current work proposes the use of a FOSS CAD package in order to extend its capabilities and transform it gradually into a FOSS BIM platform. Towards this undertaking, a first approach to developing a spatial Database Management System (DBMS) able to store, organize and manage the overall amount of information within a single application is presented.

  16. OpenCOR: a modular and interoperable approach to computational biology

    PubMed Central

    Garny, Alan; Hunter, Peter J.

    2015-01-01

    Computational biologists have been developing standards and formats for nearly two decades, with the aim of easing the description and exchange of experimental data, mathematical models, simulation experiments, etc. One of those efforts is CellML (cellml.org), an XML-based markup language for the encoding of mathematical models. Early CellML-based environments include COR and OpenCell. However, both of those tools have limitations and were eventually replaced with OpenCOR (opencor.ws). OpenCOR is an open source modeling environment that is supported on Windows, Linux and OS X. It relies on a modular approach, which means that all of its features come in the form of plugins. Those plugins can be used to organize, edit, simulate and analyze models encoded in the CellML format. We start with an introduction to CellML and two of its early adopters, whose limitations eventually led to the development of OpenCOR. We then go on to describe the general philosophy behind OpenCOR, as well as its openness and development process. Next, we illustrate various aspects of OpenCOR, such as its user interface and some of the plugins that come bundled with it (e.g., its editing and simulation plugins). Finally, we discuss some of the advantages and limitations of OpenCOR before drawing some concluding remarks. PMID:25705192

  17. Injecting Artificial Memory Errors Into a Running Computer Program

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
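
    As a rough illustration of the fault-rate parameterization described above (SEUs per byte per second times exposure time), the sketch below flips random bits in a NumPy buffer; it is not BITFLIPS itself, and the rate and exposure values are made up.

        # Illustrative sketch (not BITFLIPS): inject random single-bit upsets into a
        # NumPy array at a given fault rate, expressed in SEUs per byte per second.
        import numpy as np

        def inject_seus(arr, rate_per_byte_per_s, exposure_s, rng=None):
            rng = rng or np.random.default_rng()
            raw = arr.view(np.uint8).reshape(-1)          # buffer as raw bytes
            n_flips = rng.poisson(rate_per_byte_per_s * exposure_s * raw.size)
            for _ in range(n_flips):
                byte = rng.integers(raw.size)
                bit = rng.integers(8)
                raw[byte] ^= np.uint8(1 << bit)           # flip one bit in place
            return n_flips

        data = np.ones(1000, dtype=np.float64)
        print(inject_seus(data, rate_per_byte_per_s=1e-6, exposure_s=3600.0), "bits flipped")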

  18. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiating the system matrix can be made independent of frequency, optimizing the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
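
    The Gauss-Newton/conjugate-gradient structure referred to above can be sketched generically as follows; the forward operator and the Jacobian-vector products are stand-in callables, and this is not the authors' code.

        # Generic sketch of one Gauss-Newton model update solved with conjugate
        # gradients; jac_vec/jacT_vec apply the Jacobian and its transpose implicitly.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def gauss_newton_step(m, d_obs, forward, jac_vec, jacT_vec, reg=1e-2):
            r = d_obs - forward(m)                        # data residual
            A = LinearOperator((m.size, m.size),          # applies (J^T J + reg I) v
                               matvec=lambda v: jacT_vec(m, jac_vec(m, v)) + reg * v)
            dm, _ = cg(A, jacT_vec(m, r))                 # solve the normal equations
            return m + dm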

  19. MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.

    PubMed

    Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui

    2015-12-12

    Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes predisposition to error. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
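
    Two of the figures of merit named above can be sketched with common textbook conventions (blank mean plus three standard deviations for the limit of detection; lowest level whose replicate CV stays below 20% for the lower limit of quantification). MRMPlus follows the CPTAC working-group definitions, which may differ in detail, so the sketch below is illustrative only.

        # Rough sketch of two assay figures of merit, using common conventions
        # rather than the exact MRMPlus/CPTAC computations.
        import numpy as np

        def limit_of_detection(blank_responses):
            b = np.asarray(blank_responses, dtype=float)
            return b.mean() + 3.0 * b.std(ddof=1)

        def lower_limit_of_quantification(levels, replicate_responses, max_cv=0.20):
            # levels: increasing concentrations; replicate_responses[i]: replicates at levels[i]
            for conc, reps in zip(levels, replicate_responses):
                reps = np.asarray(reps, dtype=float)
                if reps.std(ddof=1) / reps.mean() <= max_cv:
                    return conc
            return None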

  20. Soil Monitor: an open source web application for real-time soil sealing monitoring and assessment

    NASA Astrophysics Data System (ADS)

    Langella, Giuliano; Basile, Angelo; Giannecchini, Simone; Iamarino, Michela; Munafò, Michele; Terribile, Fabio

    2016-04-01

    Soil sealing is one of the most important causes of land degradation and desertification. In Europe, soil covered by impermeable materials has increased by about 80% since the Second World War, while the population has only grown by one third. There is increasing concern at high political levels about the need to attenuate imperviousness itself and its effects on soil functions. The European Commission promulgated a roadmap (COM(2011) 571) targeting zero net land take by 2050. The European Commission also published a report in 2011 providing best practices and guidelines for limiting soil sealing and imperviousness. In this scenario, we developed an open-source-based Soil Sealing Geospatial Cyber Infrastructure (SS-GCI) named "Soil Monitor". This tool merges a webGIS with parallel geospatial computation in a fast and dynamic fashion in order to provide real-time assessments of soil sealing at high spatial resolution (20 meters and below) over the whole of Italy. Common open source webGIS packages, such as GeoServer and MapStore, are used to implement both the data management and visualization infrastructures. The high-speed geospatial computation is ensured by GPU parallelism using the CUDA (Compute Unified Device Architecture) framework by NVIDIA®. This kind of parallelism required writing, from scratch, all the code needed to perform the geospatial computations behind the soil sealing toolbox. The combination of GPU computing with webGIS infrastructures is relatively novel and required particular attention to the Java-CUDA programming interface. As a result, Soil Monitor can perform highly time-consuming calculations (for instance, querying an entire Italian administrative region as the area of interest) in less than one minute. The web application runs in a web browser and nothing needs to be installed before using it. Potentially everybody can use it, but the main targets are stakeholders dealing with sealing, such as policy makers, land owners and asphalt/cement companies. As a matter of fact, Soil Monitor can be used to improve spatial planning, thereby limiting the progression of disordered soil sealing, which causes both the direct loss of soils due to imperviousness and the indirect loss caused by fragmentation of soils (which has various negative effects on the durability of soil functions, such as habitat corridors). Further, a future version of Soil Monitor could estimate the best location for a new building or help compensate for soil losses through offsetting actions in other areas. The presented SS-GCI dealing with soil sealing - if appropriately scaled - would aid the implementation of best practices for limiting soil sealing or mitigating its effects on soil functions.
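
    As an illustration of the kind of per-pixel GPU reduction such a service parallelizes, the sketch below counts impervious pixels inside an area-of-interest mask; it is Python with Numba's CUDA support rather than the Java-CUDA stack mentioned above, and all data are synthetic.

        # Illustration only (not Soil Monitor code): GPU count of impervious pixels
        # inside an area of interest, using Numba's CUDA support and synthetic data.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def count_sealed(sealed, aoi, out):
            i = cuda.grid(1)
            if i < sealed.size and aoi[i] == 1 and sealed[i] == 1:
                cuda.atomic.add(out, 0, 1)

        sealed = (np.random.rand(2_000_000) > 0.9).astype(np.uint8)   # 1 = impervious
        aoi = np.ones_like(sealed)                                    # region-of-interest mask
        out = np.zeros(1, dtype=np.int64)
        count_sealed[(sealed.size + 255) // 256, 256](sealed, aoi, out)
        print("sealed fraction in AOI:", out[0] / aoi.sum())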

  1. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing offers an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.

  2. Architecture Design and Experimental Platform Demonstration of Optical Network based on OpenFlow Protocol

    NASA Astrophysics Data System (ADS)

    Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang

    2016-02-01

    With the extensive application of cloud computing and data centres, as well as constantly emerging services, big data with a bursty character has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with software defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node to be employed in optical cross-connects (OXC) and reconfigurable optical add/drop multiplexers (ROADM) is proposed. An open source OpenFlow controller is extended with routing strategies. In addition, an experimental platform based on the OpenFlow protocol for software defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity tests, protection switching and load balancing experiments on this test platform.

  3. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and the produced point cloud, to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
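
    A minimal dense-stereo step with the OpenCV library that WASS builds on is sketched below; this is only the disparity-to-point-cloud core, not the full pipeline (extrinsic calibration, sea-plane estimation and filtering are omitted), and the file names and reprojection matrix are placeholders.

        # Minimal OpenCV dense-stereo sketch (not the WASS pipeline itself).
        import cv2
        import numpy as np

        left = cv2.imread("cam0.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("cam1.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns 16*disp

        Q = np.eye(4, dtype=np.float32)   # placeholder; in practice from cv2.stereoRectify
        points3d = cv2.reprojectImageTo3D(disparity, Q)   # per-pixel (X, Y, Z) cloud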

  4. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  5. THE BERKELEY DATA ANALYSIS SYSTEM (BDAS): AN OPEN SOURCE PLATFORM FOR BIG DATA ANALYTICS

    DTIC Science & Technology

    2017-09-01

    Evan Sparks, Oliver Zahn, Michael J. Franklin, David A. Patterson, Saul Perlmutter. Scientific Computing Meets Big Data Technology: An Astronomy ...Processing Astronomy Imagery Using Big Data Technology. IEEE Transactions on Big Data, 2016. Approved for Public Release; Distribution Unlimited.

  6. DEVELOPMENTS AND APPLICATIONS OF CFD SIMULATIONS OF MICROMETEOROLOGY AND POLLUTION TRANSPORT IN SUPPORT OF AIR QUALITY MODELING

    EPA Science Inventory

    Development and application of computational fluid dynamics (CFD) simulations are being advanced through case studies for simulating air pollutant concentrations from sources within open fields and within complex urban building environments. CFD applications have been under deve...

  7. The Case for Open Source Software: The Interactional Discourse Lab

    ERIC Educational Resources Information Center

    Choi, Seongsook

    2016-01-01

    Computational techniques and software applications for the quantitative content analysis of texts are now well established, and many qualitative data software applications enable the manipulation of input variables and the visualization of complex relations between them via interactive and informative graphical interfaces. Although advances in…

  8. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  9. Visualization and Quality Control Web Tools for CERES Products

    NASA Astrophysics Data System (ADS)

    Mitrescu, C.; Doelling, D. R.; Rutan, D. A.

    2016-12-01

    The CERES project continues to provide the scientific community a wide variety of satellite-derived data products such as observed TOA broadband shortwave and longwave fluxes, computed TOA and surface fluxes, as well as cloud, aerosol, and other atmospheric parameters. They encompass a wide range of temporal and spatial resolutions, suited to specific applications. Now in its 16th year, the project's products are mostly used by climate modeling communities that focus on global mean energetics, meridional heat transport, and climate trend studies. In order to serve all our users, we developed a web-based Ordering and Visualization Tool (OVT). Using open source software such as Eclipse, Java, JavaScript, OpenLayers, Flot, Google Maps, Python, and others, the OVT team developed a series of specialized functions to be used in the process of CERES Data Quality Control (QC). We mention 1- and 2-D histograms, anomalies, deseasonalization, temporal and spatial averaging, side-by-side parameter comparison, and other functions that made the QC process far easier and faster and, more importantly, far more portable. We are now in the process of integrating ground-site observed surface fluxes to further help the CERES project QC the CERES computed surface fluxes. These features will give users the opportunity to perform their own comparisons of the CERES computed surface fluxes and observed ground site fluxes. An overview of the CERES OVT basic functions using open source software, as well as future steps in expanding its capabilities, will be presented at the meeting.
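
    In the same spirit as the anomaly and deseasonalization helpers mentioned above (though not the OVT code itself), a monthly time series can be deseasonalized by removing its mean annual cycle, as in this small sketch:

        # Sketch of a deseasonalization helper: subtract the mean annual cycle from a
        # monthly series to obtain anomalies. Mirrors the QC functions in spirit only.
        import numpy as np

        def deseasonalize(monthly_series):
            x = np.asarray(monthly_series, dtype=float)   # one value per month
            month = np.arange(x.size) % 12
            climatology = np.array([x[month == m].mean() for m in range(12)])
            return x - climatology[month]                 # anomaly about the annual cycle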

  10. Integrating HCI Specialists into Open Source Software Development Projects

    NASA Astrophysics Data System (ADS)

    Hedberg, Henrik; Iivari, Netta

    Typical open source software (OSS) development projects are organized around technically talented developers, whose communication is based on technical aspects and source code. Decision-making power is gained through proven competence and activity in the project, and non-technical end-user opinions are too often neglected. In addition, human-computer interaction (HCI) specialists have also encountered difficulties in trying to participate in OSS projects, because there seems to be no clear authority and responsibility for them. In this paper, based on HCI and OSS literature, we introduce an extended OSS development project organization model that adds a new level of communication and roles for attending to the human aspects of software. The proposed model makes the existence of HCI specialists visible in the projects, and promotes interaction between developers and the HCI specialists in the course of a project.

  11. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    PubMed

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN approach also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
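
    A conceptual sketch of a network of this kind is given below: a small convolutional model that maps a reconstructed slice to a center-of-rotation offset. The architecture is an assumption for illustration; the published xlearn toolbox defines its own models and training procedure.

        # Conceptual sketch (not the xlearn architecture): a small CNN regressing a
        # rotation-axis offset, in pixels, from a reconstructed slice.
        import torch
        import torch.nn as nn

        class CenterNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 1)

            def forward(self, x):          # x: (batch, 1, H, W) reconstructed slices
                return self.head(self.features(x).flatten(1))

        offsets = CenterNet()(torch.randn(4, 1, 128, 128))   # dummy batch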

  12. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.

  13. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referral hospital of Bhutan, which is equipped with a CT, an MRI, digital radiology, and a suite of several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distribution of images to clinical wards. The solution developed for this project combines several open source software platforms in a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of the highly popular open-source software. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which had a significant benefit on their workflow and ability to perform diagnostic tasks more efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.

  14. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  15. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  16. CMOST: an open-source framework for the microsimulation of colorectal cancer screening strategies.

    PubMed

    Prakash, Meher K; Lang, Brian; Heinrich, Henriette; Valli, Piero V; Bauerfeind, Peter; Sonnenberg, Amnon; Beerenwinkel, Niko; Misselwitz, Benjamin

    2017-06-05

    Colorectal cancer (CRC) is a leading cause of cancer-related mortality. CRC incidence and mortality can be reduced by several screening strategies, including colonoscopy, but randomized CRC prevention trials face significant obstacles such as the need for large study populations with long follow-up. Therefore, CRC screening strategies will likely be designed and optimized based on computer simulations. Several computational microsimulation tools have been reported for estimating the efficiency and cost-effectiveness of CRC prevention. However, none of these tools is publicly available. There is a need for an open source framework to answer practical questions, including testing of new screening interventions and adapting findings to local conditions. We developed and implemented a new microsimulation model, Colon Modeling Open Source Tool (CMOST), for modeling the natural history of CRC, simulating the effects of CRC screening interventions, and calculating the resulting costs. CMOST facilitates automated parameter calibration against epidemiological adenoma prevalence and CRC incidence data. Predictions of CMOST were highly similar to the results of a large endoscopic CRC prevention study as well as to the predictions of existing microsimulation models. We applied CMOST to calculate the optimal timing of a screening colonoscopy. CRC incidence and mortality are reduced most efficiently by a colonoscopy between the ages of 56 and 59, while discounted life years gained (LYG) are maximal at 49-50 years. With a dwell time of 13 years, the most cost-effective screening is at 59 years, at $17,211 discounted USD per LYG. While cost-efficiency varied according to dwell time, it did not influence the optimal time point of screening interventions within the tested range. Predictions of CMOST are highly similar to those of a randomized CRC prevention trial as well as those of other microsimulation tools. This open source tool will enable health-economics analyses for various countries, health-care scenarios and CRC prevention strategies. CMOST is freely available under the GNU General Public License at https://gitlab.com/misselwb/CMOST.
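
    The cost-effectiveness summary quoted above (discounted dollars per discounted life year gained) reduces to a simple discounting calculation, sketched below with made-up numbers; CMOST's actual accounting is far more detailed.

        # Back-of-the-envelope sketch of cost per discounted life year gained (LYG);
        # all numbers are placeholders, not CMOST outputs. 3% annual discount assumed.
        def discounted(values_per_year, rate=0.03):
            return sum(v / (1.0 + rate) ** t for t, v in enumerate(values_per_year))

        lyg_per_year = [0.0] * 10 + [0.05] * 20    # toy: gains accrue late in life
        cost_per_year = [120.0] + [0.0] * 29       # toy: one-off screening cost

        print("discounted USD per LYG:", discounted(cost_per_year) / discounted(lyg_per_year))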

  17. Noise Radiation Of A Strongly Pulsating Tailpipe Exhaust

    NASA Astrophysics Data System (ADS)

    Peizi, Li; Genhua, Dai; Zhichi, Zhu

    1993-11-01

    The method of characteristics is used to solve the problem of the propagation of a strongly pulsating flow in an exhaust system tailpipe. For a strongly pulsating exhaust, the flow may shock at the pipe's open end at some point in a pulsation where the flow pressure exceeds its critical value. The method fails if one insists on setting the flow pressure equal to the atmospheric pressure as the pipe-end boundary condition. To solve the problem, we set the Mach number equal to 1 as the boundary condition when the flow pressure exceeds its critical value. For a strongly pulsating flow, the fluctuations of the flow variables may be much higher than their respective time averages. Therefore, the acoustic radiation method would fail in the computation of the noise radiation from the pipe's open end. We simulate the exhaust flow out of the open end as a simple sound source to compute the noise radiation, which has been successfully applied in reference [1]. The simple sound source strength is proportional to the volume acceleration of the exhaust gas. Also computed is the noise radiation from the turbulence of the exhaust flow, as was done in reference [1]. Noise from a reciprocating valve simulator has been treated in detail. The radiation efficiency is very low for the pressure range considered and is about 10^-5. The radiation efficiency coefficient increases with the square of the frequency. Computation of the pipe-length dependence of the noise radiation and mass flux allows us to design a suitable length for an aerodynamic noise generator or a reciprocating internal combustion engine. For the former, powerful noise radiation is preferable. For the latter, maximum mass flux is desired because a freer exhaust is preferable.
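
    The simple-source model mentioned above, with source strength proportional to the volume acceleration of the exhaust gas, can be sketched numerically as follows; the pulsating volume-flux signal and all numbers are illustrative, not data from the paper.

        # Sketch of a monopole ("simple source") radiation estimate: far-field pressure
        # proportional to volume acceleration, p(r, t) = rho0 * dQ/dt / (4*pi*r).
        import numpy as np

        rho0, r, fs = 1.2, 10.0, 20000.0              # air density, distance, sample rate
        t = np.arange(0.0, 0.1, 1.0 / fs)
        Q = 1e-3 * (1.0 + np.sin(2 * np.pi * 50 * t)) ** 3   # toy pulsating volume flux (m^3/s)
        dQ_dt = np.gradient(Q, 1.0 / fs)                      # volume acceleration
        p = rho0 * dQ_dt / (4.0 * np.pi * r)                  # radiated pressure (Pa)
        print("peak level ~", 20 * np.log10(np.abs(p).max() / 2e-5), "dB re 20 uPa")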

  18. Numerical tools to predict the environmental loads for offshore structures under extreme weather conditions

    NASA Astrophysics Data System (ADS)

    Wu, Yanling

    2018-05-01

    In this paper, extreme waves were generated using the open source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM with linear and nonlinear NewWave input, and were used to conduct numerical simulations of the wave impact process. Numerical tools based on first-order (with and without stretching) and second-order NewWave are investigated. Simulations predicting the force loading on an offshore platform under extreme weather conditions are implemented and compared.

  19. PhysioNet: physiologic signals, time series and related open source software for basic, clinical, and applied research.

    PubMed

    Moody, George B; Mark, Roger G; Goldberger, Ary L

    2011-01-01

    PhysioNet provides free web access to over 50 collections of recorded physiologic signals and time series, and related open-source software, in support of basic, clinical, and applied research in medicine, physiology, public health, biomedical engineering and computing, and medical instrument design and evaluation. Its three components (PhysioBank, the archive of signals; PhysioToolkit, the software library; and PhysioNetWorks, the virtual laboratory for collaborative development of future PhysioBank data collections and PhysioToolkit software components) connect researchers and students who need physiologic signals and relevant software with researchers who have data and software to share. PhysioNet's annual open engineering challenges stimulate rapid progress on unsolved or poorly solved questions of basic or clinical interest, by focusing attention on achievable solutions that can be evaluated and compared objectively using freely available reference data.

  20. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  1. Multilevel Parallelization of AutoDock 4.2.

    PubMed

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
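
    The mpAD4 source itself is not reproduced in this record; as a generic, hedged sketch of the system-level (MPI) idea, distributing independent docking jobs across ranks so that each rank keeps reusing the grid maps already on its node, one could write something like the following with mpi4py (the autodock4 command line and file names are placeholders):

      # Sketch of MPI-level parallelization of independent docking jobs (not mpAD4 itself).
      # Each rank takes a strided slice of the ligand list; grid maps prepared once per
      # node are reused because every docking on that rank reads the same map files.
      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      ligands = [f"ligand_{i:04d}" for i in range(1000)]   # hypothetical ligand library

      for name in ligands[rank::size]:                     # round-robin assignment to ranks
          # Hypothetical AutoDock 4 invocation; dpf/dlg file names are placeholders.
          subprocess.run(["autodock4", "-p", f"{name}.dpf", "-l", f"{name}.dlg"], check=False)

      comm.Barrier()
      if rank == 0:
          print("virtual screen finished on", size, "MPI ranks")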

  2. Development of efficient and cost-effective distributed hydrological modeling tool MWEasyDHM based on open-source MapWindow GIS

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao

    2011-09-01

    Many regions in China are still threatened by frequent floods and water resource shortages. Consequently, reproducing and predicting the hydrological processes in watersheds is a hard but unavoidable task for reducing the risk of damage and loss. It is therefore necessary to develop an efficient and cost-effective hydrological tool in China, as many areas need to be modeled. Established hydrological tools such as Mike SHE and ArcSWAT (the soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting such commercial tools in such a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out a simulation, thus lowering the efficiency of the modeling process. In addition, commercial hydrological models usually cannot be modified or improved to suit some of the special hydrological conditions in China. Some other hydrological models are open source but are integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on the open-source MapWindow GIS; its purpose is to establish the first open-source, GIS-based distributed hydrological model tool in China by integrating modules for preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules and, more importantly, an efficient and cost-effective hydrological simulation tool. The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module supports parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results in tabular form and on spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible, and promises further development of cost-effective applications in various watersheds.

  3. New additions to the cancer precision medicine toolkit.

    PubMed

    Mardis, Elaine R

    2018-04-13

    New computational and database-driven tools are emerging to aid in the interpretation of cancer genomic data as its use becomes more common in clinical evidence-based cancer medicine. Two such open source tools, published recently in Genome Medicine, provide important advances to address the clinical cancer genomics data interpretation bottleneck.

  4. DSB Task Force on Cyber Supply Chain

    DTIC Science & Technology

    2017-02-06

    27 3.4 Cybersecurity for Commercial and Open Source Components...Communications and Intelligence ASD(L&MR): Assistant Secretary of Defense for Logistics and Materiel Readiness ASD(R&E): Assistant Secretary of Defense...system BSIMM: Building Security in Maturity Model C4ISR: command, control, communications, computers, intelligence , surveillance and

  5. The Virtual Genetics Lab: A Freely-Available Open-Source Genetics Simulation

    ERIC Educational Resources Information Center

    White, Brian; Bolker, Ethan; Koolar, Nikunj; Ma, Wei; Maw, Naing Naing; Yu, Chung Ying

    2007-01-01

    This lab is a computer simulation of transmission genetics. It presents students with a genetic phenomenon--the inheritance of a randomly selected trait. The students' task is to determine how this trait is inherited by designing their own crosses and analyzing the results produced by the software.

  6. Decision-Theoretic Methods in Simulation Optimization

    DTIC Science & Technology

    2014-09-24

    freely available, and is also being considered for use by Netflix . • [Scott et al., 2011] provides an easily computed approximation to the knowledge...Area, such as Netflix . It has received a great deal of attention from the com- munity, and is already quite popular on the open-source software

  7. Homology Modeling and Molecular Docking for the Science Curriculum

    ERIC Educational Resources Information Center

    McDougal, Owen M.; Cornia, Nic; Sambasivarao, S. V.; Remm, Andrew; Mallory, Chris; Oxford, Julia Thom; Maupin, C. Mark; Andersen, Tim

    2014-01-01

    DockoMatic 2.0 is a powerful open source software program (downloadable from sourceforge.net) that allows users to utilize a readily accessible computational tool to explore biomolecules and their interactions. This manuscript describes a practical tutorial for use in the undergraduate curriculum that introduces students to macromolecular…

  8. Engaging Language Learners through Technology Integration: Theory, Applications, and Outcomes

    ERIC Educational Resources Information Center

    Li, Shuai, Ed.; Swanson, Peter, Ed.

    2014-01-01

    Web 2.0 technologies, open source software platforms, and mobile applications have transformed teaching and learning of second and foreign languages. Language teaching has transitioned from a teacher-centered approach to a student-centered approach through the use of Computer-Assisted Language Learning (CALL) and new teaching approaches.…

  9. Frameworks Coordinate Scientific Data Management

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.

  10. Wearable Notification via Dissemination Service in a Pervasive Computing Environment

    DTIC Science & Technology

    2015-09-01

    context, state, and environment in a manner that would be transparent to a Soldier’s common operations. 15. SUBJECT TERMS pervasive computing, Android ...of user context shifts, i.e., changes in the user’s position, history , workflow, or resource interests. If the PCE is described as a 2-component...convenient viewing on the Glass’s screen just above the line of sight. All of the software developed uses Google’s Android open-source software stack

  11. Visualization, documentation, analysis, and communication of large scale gene regulatory networks

    PubMed Central

    Longabaugh, William J.R.; Davidson, Eric H.; Bolouri, Hamid

    2009-01-01

    Summary Genetic regulatory networks (GRNs) are complex, large-scale, and spatially and temporally distributed. These characteristics impose challenging demands on computational GRN modeling tools, and there is a need for custom modeling tools. In this paper, we report on our ongoing development of BioTapestry, an open source, freely available computational tool designed specifically for GRN modeling. We also outline our future development plans, and give some examples of current applications of BioTapestry. PMID:18757046

  12. Direct measurement of the speed of sound using a microphone and a speaker

    NASA Astrophysics Data System (ADS)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-05-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free, open-source digital audio editor and recording application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is calculated. The result is in very good agreement with the value reported in the literature.
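
    The analysis script is not part of this record; a minimal sketch of the final step, fitting distance against time-of-flight and reading the slope as the speed of sound, might look like the following (the measurement pairs are fabricated for illustration):

      # Sketch: estimate the speed of sound as the slope of distance vs. time-of-flight.
      # The (distance, time) pairs below are placeholders, not the paper's data.
      import numpy as np

      distance = np.array([0.50, 1.00, 1.50, 2.00, 2.50])            # m
      tof = np.array([1.47e-3, 2.93e-3, 4.39e-3, 5.84e-3, 7.30e-3])  # s

      slope, intercept = np.polyfit(tof, distance, 1)
      print(f"speed of sound ~ {slope:.1f} m/s")   # expect roughly 343 m/s at room temperature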

  13. Integrating Clinical Trial Imaging Data Resources Using Service-Oriented Architecture and Grid Computing

    PubMed Central

    Cladé, Thierry; Snyder, Joshua C.

    2010-01-01

    Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775

  14. Mission Driven Scene Understanding: Candidate Model Training and Validation

    DTIC Science & Technology

    2016-09-01

    driven scene understanding. One of the candidate engines that we are evaluating is a convolutional neural network (CNN) program installed on a Windows 10...Theano-AlexNet6,7) installed on a Windows 10 notebook computer. To the best of our knowledge, an implementation of the open-source, Python-based...AlexNet CNN on a Windows notebook computer has not been previously reported. In this report, we present progress toward the proof-of-principle testing

  15. A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software

    NASA Technical Reports Server (NTRS)

    Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.

    2016-01-01

    This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle, and biofluidic studies. As a result, a series of Python scripts provided the bridging mechanism to address the shortcomings of the available open source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and preparing boundary specifications for finite-element analysis. To fully enable FEM modeling, an in-house Python script assigned material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensities of the parent medical image, correlated with published mappings of image intensity to material properties such as ash density. A graphical user interface combined the Python scripts and the other software into a user-friendly package. This Python-based workflow provides a potential alternative to expensive commercial software and to limited open-source freeware programs for the creation of 3D computational models. More work will be needed to validate this approach for creating finite-element models.
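
    The project's own Python scripts are not included in this record; as a hedged sketch of the segmentation-to-surface step described above, the VTK Python bindings can read a CT series and export a marching-cubes iso-surface for downstream meshing (the directory name and iso-value are placeholders):

      # Sketch: extract an iso-surface from a DICOM CT series with VTK's marching cubes
      # and write it as STL for downstream meshing (e.g., in Blender/Gmsh).
      import vtk

      reader = vtk.vtkDICOMImageReader()
      reader.SetDirectoryName("ct_series/")   # placeholder path to the DICOM folder
      reader.Update()

      mc = vtk.vtkMarchingCubes()
      mc.SetInputConnection(reader.GetOutputPort())
      mc.SetValue(0, 300)                     # iso-value (assumed, roughly bone-level intensity)
      mc.Update()

      writer = vtk.vtkSTLWriter()
      writer.SetFileName("bone_surface.stl")
      writer.SetInputConnection(mc.GetOutputPort())
      writer.Write()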

  16. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499

  17. Automated Reporting of DXA Studies Using a Custom-Built Computer Program.

    PubMed

    England, Joseph R; Colletti, Patrick M

    2018-06-01

    Dual-energy x-ray absorptiometry (DXA) scans are a critical population health tool; they are relatively simple to interpret but can be time-consuming to report, often requiring manual transfer of bone mineral density values and associated statistics into commercially available dictation systems. We describe here a custom-built computer program for automated reporting of DXA scans that uses Pydicom, an open-source package written in the Python language, and regular expressions to mine DICOM tags for patient information and bone mineral density statistics. This program, easy to emulate by any novice computer programmer, has doubled our efficiency at reporting DXA scans and has eliminated dictation errors.
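
    The custom program itself is not published in this record; a hedged fragment of the approach it describes, reading DICOM tags with Pydicom and pattern-matching text fields with regular expressions, could look like the following (the tag mined for the BMD value is hypothetical, since vendors differ):

      # Sketch: pull patient identifiers from a DXA DICOM file with pydicom and mine a
      # free-text field for a bone mineral density value with a regular expression.
      import re
      import pydicom

      ds = pydicom.dcmread("dxa_study.dcm")          # placeholder file name

      patient = str(ds.get("PatientName", "unknown"))
      study_date = ds.get("StudyDate", "")

      # Hypothetical: some vendors embed results in text tags such as ImageComments.
      comments = str(ds.get("ImageComments", ""))
      match = re.search(r"BMD[:=]\s*([\d.]+)\s*g/cm2", comments)
      bmd = float(match.group(1)) if match else None

      print(f"{patient} ({study_date}): BMD = {bmd} g/cm^2")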

  18. A Dashboard for the Italian Computing in ALICE

    NASA Astrophysics Data System (ADS)

    Elia, D.; Vino, G.; Bagnasco, S.; Crescente, A.; Donvito, G.; Franco, A.; Lusso, S.; Mura, D.; Piano, S.; Platania, G.; ALICE Collaboration

    2017-10-01

    A dashboard devoted to computing in the Italian sites for the ALICE experiment at the LHC has been deployed. A combination of different complementary monitoring tools is typically used in most of the Tier-2 sites: this makes it somewhat difficult to figure out at a glance the status of a site and to compare information extracted from different sources for debugging purposes. To overcome these limitations a dedicated ALICE dashboard has been designed and implemented in each of the ALICE Tier-2 sites in Italy: in particular, it provides a single, interactive and easily customizable graphical interface where heterogeneous data are presented. The dashboard is based on two main ingredients: an open source time-series database and a dashboard builder tool for visualizing time-series metrics. Various sensors, able to collect data from the multiple data sources, have also been written. A first version of a national computing dashboard has been implemented using a specific instance of the builder to gather data from all the local databases.

  19. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.

  20. Visualization and Quality Control Web Tools for CERES Products

    NASA Astrophysics Data System (ADS)

    Mitrescu, C.; Doelling, D. R.

    2017-12-01

    The NASA CERES project continues to provide the scientific community with a wide variety of satellite-derived data products such as observed TOA broadband shortwave and longwave fluxes, computed TOA and surface fluxes, as well as cloud, aerosol, and other atmospheric parameters. They encompass a wide range of temporal and spatial resolutions, suited to specific applications. CERES data are used mostly by climate modeling communities but also by a wide variety of educational institutions. To better serve our users, a web-based Ordering and Visualization Tool (OVT) was developed using Open Source Software such as Eclipse, Java, JavaScript, OpenLayers, Flot, Google Maps, Python, and others. Due to increased demand by our own scientists, we also implemented a series of specialized functions to be used in the process of CERES Data Quality Control (QC), such as 1- and 2-D histograms, anomalies and differences, temporal and spatial averaging, and side-by-side parameter comparison, which made the QC process far easier, faster, and, more importantly, far more portable. The integration of ground-site observed surface fluxes further enables the CERES project to QC the CERES computed surface fluxes. An overview of the CERES OVT basic functions using Open Source Software, as well as future steps in expanding its capabilities, will be presented at the meeting.

  1. Recent achievements in real-time computational seismology in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also ready to be opened to researchers and for public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains point-source information in less than 120 s; ROS completes a 3D simulation in less than 3 minutes). All of these computational results are now posted on the internet in real time. For more information, please visit the real-time computational seismology earthquake report webpage (RCS).

  2. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933

  3. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  4. SlicerRT: radiation therapy research toolkit for 3D Slicer.

    PubMed

    Pinter, Csaba; Lasso, Andras; Wang, An; Jaffray, David; Fichtinger, Gabor

    2012-10-01

    Interest in adaptive radiation therapy research is constantly growing, but software tools available for researchers are mostly either expensive, closed proprietary applications, or free open-source packages with limited scope, extensibility, reliability, or user support. To address these limitations, we propose SlicerRT, a customizable, free, and open-source radiation therapy research toolkit. SlicerRT aspires to be an open-source toolkit for RT research, providing fast computations, convenient workflows for researchers, and a general image-guided therapy infrastructure to assist clinical translation of experimental therapeutic approaches. It is a medium into which RT researchers can integrate their methods and algorithms, and conduct comparative testing. SlicerRT was implemented as an extension for the widely used 3D Slicer medical image visualization and analysis application platform. SlicerRT provides functionality specifically designed for radiation therapy research, in addition to the powerful tools that 3D Slicer offers for visualization, registration, segmentation, and data management. The feature set of SlicerRT was defined through consensus discussions with a large pool of RT researchers, including both radiation oncologists and medical physicists. The development processes used were similar to those of 3D Slicer to ensure software quality. Standardized mechanisms of 3D Slicer were applied for documentation, distribution, and user support. The testing and validation environment was configured to automatically launch a regression test upon each software change and to perform comparison with ground truth results provided by other RT applications. Modules have been created for importing and loading DICOM-RT data, computing and displaying dose volume histograms, creating accumulated dose volumes, comparing dose volumes, and visualizing isodose lines and surfaces. The effectiveness of using 3D Slicer with the proposed SlicerRT extension for radiation therapy research was demonstrated on multiple use cases. A new open-source software toolkit has been developed for radiation therapy research. SlicerRT can import treatment plans from various sources into 3D Slicer for visualization, analysis, comparison, and processing. The provided algorithms are extensively tested and they are accessible through a convenient graphical user interface as well as a flexible application programming interface.

  5. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    NASA Astrophysics Data System (ADS)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premiere tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
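
    The converted HSPF code is not shown here; as a minimal, hedged illustration of the Numba approach mentioned above, just-in-time compiling a numerical loop so that Python approaches compiled-language speed, consider this generic storage-routing kernel (not an HSPF routine):

      # Sketch: a just-in-time compiled water-balance loop with Numba's @njit.
      # The linear-reservoir recursion below is a generic stand-in, not HSPF code.
      import numpy as np
      from numba import njit

      @njit
      def route(inflow, k, dt):
          storage = np.empty_like(inflow)
          s = 0.0
          for i in range(inflow.size):
              s += dt * (inflow[i] - k * s)   # simple linear reservoir: dS/dt = I - k*S
              storage[i] = s
          return storage

      inflow = np.random.rand(1_000_000)      # synthetic inflow series
      out = route(inflow, 0.1, 1.0)           # first call compiles; later calls run at native speed
      print(out[-1])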

  6. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  7. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.

    PubMed

    Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M

    2009-09-30

    QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.

  8. 3D Printing of CT Dataset: Validation of an Open Source and Consumer-Available Workflow.

    PubMed

    Bortolotto, Chandra; Eshja, Esmeralda; Peroni, Caterina; Orlandi, Matteo A; Bizzotto, Nicola; Poggi, Paolo

    2016-02-01

    The broad availability of cheap three-dimensional (3D) printing equipment has raised the need for a thorough analysis of its effects on clinical accuracy. Our aim is to determine whether the accuracy of the 3D printing process is affected by the use of a low-budget workflow based on open source software and consumer-grade, commercially available 3D printers. A group of test objects was scanned with a 64-slice computed tomography (CT) scanner in order to build their 3D copies. The CT datasets were processed using a software chain based on three free and open source packages. The objects were printed with a commercially available 3D printer. Both the 3D copies and the test objects were measured using a professional digital caliper. Overall, the mean absolute difference between the test objects and their 3D copies is 0.23 mm and the mean relative difference amounts to 0.55%. Our results demonstrate that the accuracy of the 3D printing process remains high despite the use of a low-budget workflow.

  9. GeNeDA: An Open-Source Workflow for Design Automation of Gene Regulatory Networks Inspired from Microelectronics.

    PubMed

    Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques

    2016-10-01

    The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. In contrast to other tools, GeNeDA is open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be realized with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, the framework is used for the design of a sequential circuit including a biological state machine.

  10. ROSE Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, D.; Yi, Q.; Buduc, R.

    2005-02-17

    ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is a part of current research on telescoping languages, which provides optimizations of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques, common in well defined languages, to the optimization of scientific applications using well defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of applications codes. We currently support full C and C++ (including template instantiation etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full scale DOE applications codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full scale applications.

  11. Three-dimensional printing physiology laboratory technology.

    PubMed

    Sulkin, Matthew S; Widder, Emily; Shao, Connie; Holzem, Katherine M; Gloschat, Christopher; Gutbrod, Sarah R; Efimov, Igor R

    2013-12-01

    Since its inception in 19th-century Germany, the physiology laboratory has been a complex and expensive research enterprise involving experts in various fields of science and engineering. Physiology research has been critically dependent on cutting-edge technological support of mechanical, electrical, optical, and more recently computer engineers. Evolution of modern experimental equipment is constrained by lack of direct communication between the physiological community and industry producing this equipment. Fortunately, recent advances in open source technologies, including three-dimensional printing, open source hardware and software, present an exciting opportunity to bring the design and development of research instrumentation to the end user, i.e., life scientists. Here we provide an overview on how to develop customized, cost-effective experimental equipment for physiology laboratories.

  12. gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data

    NASA Astrophysics Data System (ADS)

    Hummel, Jacob A.

    2016-11-01

    We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
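
    Independent of gadfly's actual API, a hedged illustration of the underlying idea, reading GADGET-format HDF5 snapshot data into a pandas DataFrame, might look like this (the file name is a placeholder and the dataset names assume the common PartType0 gas-particle layout):

      # Sketch: load gas-particle coordinates and masses from a GADGET/GIZMO HDF5
      # snapshot into a pandas DataFrame. Dataset names follow the common snapshot
      # layout (PartType0 = gas); the file name is a placeholder.
      import h5py
      import pandas as pd

      with h5py.File("snapshot_000.hdf5", "r") as f:
          coords = f["PartType0/Coordinates"][...]   # shape (N, 3)
          masses = f["PartType0/Masses"][...]        # shape (N,)

      gas = pd.DataFrame(coords, columns=["x", "y", "z"])
      gas["mass"] = masses

      print(gas.describe())   # pandas summary statistics over all gas particles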

  13. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and Physico-Chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d

  14. XaNSoNS: GPU-accelerated simulator of diffraction patterns of nanoparticles

    NASA Astrophysics Data System (ADS)

    Neverov, V. S.

    XaNSoNS is open-source software with GPU support, which simulates X-ray and neutron 1D (or 2D) diffraction patterns and pair-distribution functions (PDF) for amorphous or crystalline nanoparticles (up to ∼10^7 atoms) of heterogeneous structural content. Among the multiple parameters of the structure the user may specify atomic displacements, site occupancies, molecular displacements and molecular rotations. The software uses general equations nonspecific to crystalline structures to calculate the scattering intensity. It supports four major standards of parallel computing: MPI, OpenMP, Nvidia CUDA and OpenCL, enabling it to run on various architectures, from CPU-based HPCs to consumer-level GPUs.
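
    The record states that the scattering intensity is computed from general equations that are not specific to crystalline structures; the standard such relation is the Debye scattering equation, sketched below with NumPy for a tiny cluster of identical atoms (a toy illustration, not XaNSoNS's optimized GPU kernels):

      # Sketch: Debye scattering equation I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij)
      # for a small cluster of identical atoms (f = 1). Purely illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      pos = rng.uniform(0.0, 2.0, size=(50, 3))        # hypothetical atom positions, nm

      # Pairwise distances, including the i == j terms (r = 0 contributes 1 to the sinc).
      diff = pos[:, None, :] - pos[None, :, :]
      r = np.sqrt((diff ** 2).sum(axis=-1))

      q = np.linspace(0.1, 50.0, 500)                  # scattering vector magnitudes, 1/nm
      qr = q[:, None, None] * r[None, :, :]
      sinc = np.where(qr == 0.0, 1.0, np.sin(qr) / np.where(qr == 0.0, 1.0, qr))
      intensity = sinc.sum(axis=(1, 2))                # f_i = f_j = 1

      print(intensity[:5])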

  15. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.

  16. Development of a software for quantitative evaluation radiotherapy target and organ-at-risk segmentation comparison.

    PubMed

    Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D

    2014-02-01

    Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.

  17. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
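
    AZOrange itself builds on the Orange framework; as a generic, hedged illustration of the automated-workflow idea (a descriptor matrix goes in, cross-validated hyper-parameter selection runs, a fitted model comes out), here is a scikit-learn sketch with placeholder data rather than AZOrange code:

      # Sketch: automated hyper-parameter selection for a QSAR-style regression model.
      # Descriptors and activities are random placeholders for a real compound data set.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import GridSearchCV, train_test_split

      X = np.random.rand(500, 50)   # 500 compounds x 50 computed descriptors
      y = np.random.rand(500)       # measured activities (placeholder)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

      search = GridSearchCV(
          RandomForestRegressor(random_state=0),
          param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
          cv=5,
      )
      search.fit(X_train, y_train)

      print("best parameters:", search.best_params_)
      print("held-out R^2:", search.best_estimator_.score(X_test, y_test))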

  18. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.

  19. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    Computational Fluid Dynamics analysis has become increasingly important in modern urban planning aimed at creating highly livable cities. This paper presents a multi-scale modeling methodology that couples the Weather Research and Forecasting (WRF) model with the open source CFD simulation tool OpenFOAM. The coupling enables simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh: the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, while the complexity of the building layout can be handled with ease by the meshing utilities of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and the field observations. The WRF-OpenFOAM coupling provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to consider future weather conditions in scenario studies on climate change impact.

  20. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-sourced OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in the directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  1. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Amjad Majid; Albert, Don; Andersson, Par

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  2. SIG Contribution in the Making of Geotechnical Maps in Urban Areas

    NASA Astrophysics Data System (ADS)

    Monteiro, António; Pais, Luís Andrade; Rodrigues, Carlos; Carvalho, Paulo

    2017-10-01

    The use of Geographic Information Systems (GIS) has spread to several areas of science, from oceanography to geotechnics. Its application in urban mapping intensified in the last century, which allowed great development due to the use of geographic databases, new analysis tools and, more recently, free open source software. Geotechnical cartography struggles with permanent and large-scale reorganization of the environment in urban areas, due to new building construction, trenching, and the drilling of sampling wells and holes. This creates an important extra volume of data beyond any pre-existing geological map. The main problem results from the fact that the natural environment is covered with buildings and communication systems. The purpose of this work is to create a viable geographic information base for geotechnical mapping using a free and open source GIS program and non-traditional cartographic sources, giving preference to open platforms. QGIS was used as the software, and “Google Maps”, “Bing Maps” and “OpenStreetMap” were used as cartographic sources through the “OpenLayers plugin” module. Finally, we also intend to identify and delimit areas by the degree of alteration and fracturing of the granite using the “Streetview” platform. The model's cartographic inputs are the geological map of the study area, open cartographic web archives, and the “Streetview” platform. The output comprises several layouts, such as the intersection of topography (roads, borders, etc.) with the geological map and the boundary of the Guarda urban zone. The use of these platforms decreases data collection time and, in some cases, careful observation of pictures taken during excavations may reveal important details for geological mapping in the study area.

  3. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  4. U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly

    2008-04-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high value targets. This data is made available to Department of Defense (DoD) and DoD contractors, Intel agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
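
    The MMSDB schema is not given in this record; a hedged sketch of opening an HDF5 signature file from Python with the free h5py library (the file, group, and dataset names are placeholders) shows the kind of FOSS access described above:

      # Sketch: inspect an HDF5 multimodal signature file with h5py. The file name and
      # any group/dataset names are placeholders; real MMSDB layouts may differ.
      import h5py

      def show(name, obj):
          # Print each group/dataset path, plus shape and dtype for datasets.
          if isinstance(obj, h5py.Dataset):
              print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
          else:
              print(f"{name}/")

      with h5py.File("signature.h5", "r") as f:
          f.visititems(show)                       # walk the whole hierarchy
          # Example read of one hypothetical acoustic channel:
          if "acoustic/channel_0" in f:
              samples = f["acoustic/channel_0"][...]
              print("acoustic samples:", samples.shape)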

  5. A low-cost computer-controlled Arduino-based educational laboratory system for teaching the fundamentals of photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-11-01

    We present a low-cost, fully computer-controlled, Arduino-based, educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools and methodology in order to learn fundamental concepts of semiconductor physics by exploring the process of an experimental physics inquiry. The system runs under the Windows operating system and is composed of a data acquisition/control board, a power supply and processing boards, sensing elements, a graphical user interface and data analysis software. The data acquisition/control board is based on the Arduino open source electronics prototyping platform. The graphical user interface and communication with the Arduino are developed in C# and C++ programming languages respectively, by using IDE Microsoft Visual Studio 2010 Professional, which is freely available to students. Finally, the data analysis is performed by using the open source, object-oriented framework ROOT. Currently the system supports five teaching activities, each one corresponding to an independent tab in the user interface. SolarInsight has been partially developed in the context of a diploma thesis conducted within the Technological Educational Institute of Piraeus under the co-supervision of the Physics and Electronic Computer Systems departments’ academic staff.

  6. Integrating an Educational Game in Moodle LMS

    ERIC Educational Resources Information Center

    Minovic, Miroslav; Milovanovic, Milos; Minovic, Jelena; Starcevic, Dusan

    2012-01-01

    The authors present a learning platform based on a computer game. Learning games combine two industries: education and entertainment, which is often called "Edutainment." The game is realized as a strategic game (similar to Risk[TM]), implemented as a module for Moodle CMS, utilizing Java Applet technology. Moodle is an open-source course…

  7. Linux Adventures on a Laptop. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  8. Processor Emulator with Benchmark Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  9. Institutional Expansion: The Case of Grid Computing

    ERIC Educational Resources Information Center

    Kertcher, Zack

    2010-01-01

    Evolutionary and revolutionary approaches have dominated the study of scientific, technological and institutional change. Yet, being focused on change within a single field, these approaches have been mute about a third, pervasive process. This process is found in a variety of cases that range from open source software to the Monte Carlo method to…

  10. INNOVATIVE APPROACH FOR MEASURING AMMONIA AND METHANE FLUXES FROM A HOG FARM USING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROSCOPY

    EPA Science Inventory

    The paper describes a new approach to quantify emissions from area air pollution sources. The approach combines path-integrated concentration data, acquired with any path-integrated optical remote sensing (PI-ORS) technique, with a computed tomography (CT) technique. In this study, an...

  11. Teacher's Corner: Structural Equation Modeling with the Sem Package in R

    ERIC Educational Resources Information Center

    Fox, John

    2006-01-01

    R is free, open-source, cooperatively developed software that implements the S statistical programming language and computing environment. The current capabilities of R are extensive, and it is in wide use, especially among statisticians. The sem package provides basic structural equation modeling facilities in R, including the ability to fit…

  12. Biomolecules in the Computer: Jmol to the Rescue

    ERIC Educational Resources Information Center

    Herraez, Angel

    2006-01-01

    Jmol is free, open source software for interactive molecular visualization. Since it is written in the Java[TM] programming language, it is compatible with all major operating systems and, in the applet form, with most modern web browsers. This article summarizes Jmol development and features that make it a valid and promising replacement for…

  13. Online Grading of Calculations in General Chemistry Laboratory Write-Ups

    ERIC Educational Resources Information Center

    Silva, Alexsandra; Gonzales, Robert; Brennan, Daniel P.

    2010-01-01

    In the past, there were frequently complaints about the grading of laboratory reports in our laboratory chemistry courses. This article discussed the implementation of an online submission of laboratory acquired data using LON-CAPA (The Learning Online Network with Computer-Assisted Personalized Approach), which is an open source management and…

  14. Podcast Pilots for Distance Planning, Programming, and Development

    ERIC Educational Resources Information Center

    Cordes, Sean

    2005-01-01

    This paper examines podcasting as a library support for distance learning and information systems and services. The manuscript provides perspective on the knowledge base in the growing area of podcasting in libraries and academia. A walkthrough of the podcast creation and distribution process using basic computing skills and open source tools is…

  15. Confronting Cyberterrorism with Cyber Deception

    DTIC Science & Technology

    2003-12-01

    break into computer systems. A further development in social engineering is the use of online translators and relay telephony services that...allow social engineers to exploit and overcome language barriers [Ollmann, 2003]. Relay telephony services are online services provided by...open source media or actively seeking the information through unscrupulous means: desk checking, social engineering, dumpster diving.

  16. Peregrine Software Toolchains | High-Performance Computing | NREL

    Science.gov Websites

    toolchain is an open-source alternative against which many technical applications are natively developed and tested. The Portland Group (PGI) C/C++ and Fortran compilers are partially supported: they are available to the HPC community but not fully supported. The PGI Accelerator compilers include NVIDIA GPU support.

  17. Using R in Introductory Statistics Courses with the pmg Graphical User Interface

    ERIC Educational Resources Information Center

    Verzani, John

    2008-01-01

    The pmg add-on package for the open source statistics software R is described. This package provides a simple to use graphical user interface (GUI) that allows introductory statistics students, without advanced computing skills, to quickly create the graphical and numeric summaries expected of them. (Contains 9 figures.)

  18. Tessera: Open source software for accelerated data science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Hafen, Ryan P.; Director, Hannah M.

    2014-06-30

    Extracting useful, actionable information from data can be a formidable challenge for the safeguards, nonproliferation, and arms control verification communities. Data scientists are often on the “front-lines” of making sense of complex and large datasets. They require flexible tools that make it easy to rapidly reformat large datasets, interactively explore and visualize data, develop statistical algorithms, and validate their approaches—and they need to perform these activities with minimal lines of code. Existing commercial software solutions often lack extensibility and the flexibility required to address the nuances of the demanding and dynamic environments where data scientists work. To address this need, Pacific Northwest National Laboratory developed Tessera, an open source software suite designed to enable data scientists to interactively perform their craft at the terabyte scale. Tessera automatically manages the complicated tasks of distributed storage and computation, empowering data scientists to do what they do best: tackling critical research and mission objectives by deriving insight from data. We illustrate the use of Tessera with an example analysis of computer network data.

  19. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
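
    The inner loop of such an SPH code is a neighbour summation weighted by a smoothing kernel; this is the kind of data-parallel work that maps onto OpenMP threads or CUDA blocks. The NumPy sketch below shows a standard cubic-spline kernel and a serial density summation for illustration only; it is not taken from the paper's source code.

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard 3D cubic-spline (Monaghan-type) smoothing kernel W(r, h)."""
            q = np.asarray(r, dtype=float) / h
            sigma = 1.0 / (np.pi * h**3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        def density(masses, distances, h):
            """Serial SPH density summation over a particle's neighbours."""
            return np.sum(masses * cubic_spline_kernel(distances, h))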

  20. Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Frederick, J. M.; Hammond, G. E.

    2017-12-01

    Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, changing funding sources (or total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation will discuss steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continues to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A

  1. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
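
    A hedged Python sketch of the idea behind such multi-fidelity estimators: a few expensive high-fidelity evaluations are combined with many cheap low-fidelity ones through a control-variate correction. The two model functions below are toy placeholders for the 3D and 0D/1D cardiovascular solvers, not the actual SimVascular or DAKOTA interfaces.

        import numpy as np

        rng = np.random.default_rng(0)

        def high_fidelity(x):        # stand-in for an expensive 3D simulation
            return np.sin(x) + 0.1 * x**2

        def low_fidelity(x):         # stand-in for a cheap 0D/1D surrogate
            return np.sin(x)

        n_hf, n_lf = 50, 5000                       # few expensive, many cheap samples
        x_hf = rng.normal(size=n_hf)
        x_lf = rng.normal(size=n_lf)

        q_hf = high_fidelity(x_hf)
        q_lf_paired = low_fidelity(x_hf)            # low fidelity on the same inputs
        q_lf_extra = low_fidelity(x_lf)

        c = np.cov(q_hf, q_lf_paired)               # control-variate weight from paired samples
        alpha = c[0, 1] / c[1, 1]

        estimate = q_hf.mean() + alpha * (q_lf_extra.mean() - q_lf_paired.mean())
        print("multi-fidelity estimate of E[Q]:", estimate)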

  2. GALARIO: a GPU accelerated library for analysing radio interferometer observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2018-06-01

    We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter and sub-millimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be directly used in C or C++. The versatility and the speed of GALARIO open new analysis pathways that otherwise would be prohibitively time consuming, e.g. fitting high-resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
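
    Conceptually, the computation GALARIO accelerates is a Fourier transform of a model sky image sampled at the observed (u, v) points. The NumPy sketch below is a simplified, nearest-pixel illustration of that idea under stated assumptions; it is not the GALARIO API or its interpolation scheme.

        import numpy as np

        def synthetic_visibilities(image, pixel_size, u, v):
            """image: square 2D model image; pixel_size in radians;
            u, v: 1D arrays of spatial frequencies in wavelengths."""
            n = image.shape[0]
            vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image)))
            freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))    # in wavelengths
            iu = np.argmin(np.abs(freqs[None, :] - u[:, None]), axis=1)
            iv = np.argmin(np.abs(freqs[None, :] - v[:, None]), axis=1)
            return vis_grid[iv, iu]                  # nearest-pixel sampling, for illustration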

  3. OpenHealth Platform for Interactive Contextualization of Population Health Open Data.

    PubMed

    Almeida, Jonas S; Hajagos, Janos; Crnosija, Ivan; Kurc, Tahsin; Saltz, Mary; Saltz, Joel

    The financial incentives for data science applications leading to improved health outcomes, such as DSRIP (bit.ly/dsrip), are well-aligned with the broad adoption of Open Data by State and Federal agencies. This creates entirely novel opportunities for analytical applications that make exclusive use of the pervasive Web Computing platform. The framework described here explores this new avenue to contextualize health data in a manner that relies exclusively on the native JavaScript interpreter and data processing resources of the ubiquitous Web Browser. The OpenHealth platform is open source and publicly hosted, with version control, at https://github.com/mathbiol/openHealth. The different data/analytics workflow architectures explored are accompanied by live applications ranging from DSRIP, such as Hospital Inpatient Prevention Quality Indicators at http://bit.ly/pqiSuffolk, to The Cancer Genome Atlas (TCGA) as illustrated by http://bit.ly/tcgascopeGBM.

  4. Final Report. Center for Scalable Application Development Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  5. Singularity: Scientific containers for mobility of compute.

    PubMed

    Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.

  6. Singularity: Scientific containers for mobility of compute

    PubMed Central

    Kurtzer, Gregory M.; Bauer, Michael W.

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014

  7. Assessment in health care education - modelling and implementation of a computer supported scoring process.

    PubMed

    Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil

    2012-01-01

    Research on computer supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability, and a more time-efficient process are examples of observed benefits. Computer supported scoring will increase the quality of the assessment results.

  8. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  9. An Investigation of the Influence of Waves on Sediment Processes in Skagit Bay

    DTIC Science & Technology

    2011-09-30

    source term parameterizations common to most surface wave models, including wave generation by wind, energy dissipation from whitecapping, and...I. Total energy and peak frequency. Coastal Engineering (29), 47-78. Zijlema, M. Computation of wind-wave spectra in coastal waters with SWAN on unstructured grids. Coastal Engineering, 2010, 57, 267-277. ...supply and wind on tidal flat sediment transport. It will be used to evaluate the capabilities of state-of-the-art open source sediment models and to

  10. Numerical simulation of the SOFIA flowfield

    NASA Technical Reports Server (NTRS)

    Klotz, Stephen P.

    1994-01-01

    This report provides a concise summary of the contribution of computational fluid dynamics (CFD) to the SOFIA (Stratospheric Observatory for Infrared Astronomy) project at NASA Ames and presents results obtained from closed- and open-cavity SOFIA simulations. The aircraft platform is a Boeing 747SP and these are the first SOFIA simulations run with the aircraft empennage included in the geometry database. In the open-cavity run the telescope is mounted behind the wings. Results suggest that the cavity markedly influences the mean pressure distribution on empennage surfaces and that 110-140 decibel (dB) sound pressure levels are typical in the cavity and on the horizontal and vertical stabilizers. A strong source of sound was found to exist on the rim of the open telescope cavity. The presence of this source suggests that additional design work needs to be performed in order to minimize the sound emanating from that location. A fluid dynamic analysis of the engine plumes is also contained in this report. The analysis was part of an effort to quantify the degradation of telescope performance resulting from the proximity of the port engine exhaust plumes to the open telescope bay.

  11. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities, will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  12. The Chemistry Development Kit (CDK) v2.0: atom typing, depiction, molecular formulas, and substructure searching.

    PubMed

    Willighagen, Egon L; Mayfield, John W; Alvarsson, Jonathan; Berg, Arvid; Carlsson, Lars; Jeliazkova, Nina; Kuhn, Stefan; Pluskal, Tomáš; Rojas-Chertó, Miquel; Spjuth, Ola; Torrance, Gilleain; Evelo, Chris T; Guha, Rajarshi; Steinbeck, Christoph

    2017-06-06

    The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception. It is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, the code base has grown significantly, however, resulting in many complex interdependencies among components and poor performance of many algorithms. We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvement to existing functionality that has led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules. Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. This paper highlights our continued efforts to provide a community driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer reviewed publishing platform for scientific computing software. Graphical abstract: CDK 2.0 provides new features and improved performance.

  13. The Computational Infrastructure for Geodynamics as a Community of Practice

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  14. Ising Processing Units: Potential and Challenges for Discrete Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffrin, Carleton James; Nagarajan, Harsha; Bent, Russell Whitford

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods to a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
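
    For readers unfamiliar with the abstraction, an Ising processing unit minimizes an energy of the form E(s) = s^T J s + h^T s over spins s_i in {-1, +1}. The Python sketch below shows that objective and a toy simulated-annealing loop for exposition only; it is not the benchmarking methodology of the report.

        import numpy as np

        def ising_energy(s, J, h):
            """Energy of spin vector s for couplings J and fields h."""
            return s @ J @ s + h @ s

        def anneal(J, h, steps=5000, t0=2.0, seed=0):
            """Toy single-spin-flip simulated annealing with linear cooling."""
            rng = np.random.default_rng(seed)
            n = len(h)
            s = rng.choice([-1, 1], size=n)
            e = ising_energy(s, J, h)
            for k in range(steps):
                t = t0 * (1.0 - k / steps) + 1e-9
                i = rng.integers(n)
                s[i] *= -1                                  # propose a flip
                e_new = ising_energy(s, J, h)
                if e_new <= e or rng.random() < np.exp((e - e_new) / t):
                    e = e_new                               # accept
                else:
                    s[i] *= -1                              # reject, undo the flip
            return s, e

        rng = np.random.default_rng(1)
        J = np.triu(rng.normal(size=(8, 8)), k=1)           # random upper-triangular couplings
        h = rng.normal(size=8)
        print(anneal(J, h))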

  15. PV_LIB Toolbox v. 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    PV_LIB comprises a library of Matlab® code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.
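
    As a flavour of the calculations such a toolbox chains together, the hedged Python sketch below converts plane-of-array irradiance and cell temperature to DC power with a simple linear temperature-coefficient model. The parameter values are illustrative and are not PV_LIB defaults.

        def dc_power(g_poa, t_cell, p_dc0=250.0, gamma=-0.004, t_ref=25.0):
            """g_poa: plane-of-array irradiance [W/m^2]; t_cell: cell temperature [C];
            p_dc0: module power at reference conditions [W];
            gamma: power temperature coefficient [1/C]."""
            return p_dc0 * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - t_ref))

        print(dc_power(g_poa=800.0, t_cell=45.0))  # ~184 W for this example module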

  16. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  17. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  18. Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project

    NASA Astrophysics Data System (ADS)

    Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with the development of an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry, the LMMP Portal, by employing a number of open standards and technologies. The Portal exposes a set of services to users to allow search, visualization, subset, and download of lunar data managed by the system. Users also have access to a set of tools that visualize, analyze and annotate the data. The infrastructure and Portal are based on web service oriented architecture. We designed the system to support solar system bodies in general including asteroids, earth and planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform and application independent access to the services and data, offering, for instance, iPad and Android mobile applications and large screen multi-touch with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services made use of open standards including: Representational State Transfer (REST); and Open Geospatial Consortium (OGC)'s Web Map Service (WMS), Web Coverage Service (WCS), Web Feature Service (WFS). Its data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT) - open source data catalog, archive, file management, data grid framework; openSSO - open source access management and federation platform; solr - open source enterprise search platform; redmine - open source project collaboration and management framework; GDAL - open source geospatial data abstraction library; and others. Its data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom written applications or off-the-shelf applications such as GoogleEarth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the portal such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet, via mobile devices such as Apple's iPad.
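
    As an illustration of the OGC standards mentioned above, the Python snippet below builds a WMS 1.3.0 GetMap request of the kind such a portal can serve. The endpoint URL and layer name are hypothetical placeholders, not actual LMMP services.

        from urllib.parse import urlencode

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "lro_lroc_mosaic",      # hypothetical layer name
            "CRS": "EPSG:4326",
            "BBOX": "-45,-90,45,90",
            "WIDTH": 1024,
            "HEIGHT": 512,
            "FORMAT": "image/png",
        }
        url = "https://example.gov/lmmp/wms?" + urlencode(params)
        print(url)  # fetch with any HTTP client to retrieve the rendered map image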

  19. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  20. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online.

  1. SED-ED, a workflow editor for computational biology experiments written in SED-ML.

    PubMed

    Adams, Richard R

    2012-04-15

    The simulation experiment description markup language (SED-ML) is a new community data standard to encode computational biology experiments in a computer-readable XML format. Its widespread adoption will require the development of software support to work with SED-ML files. Here, we describe a software tool, SED-ED, to view, edit, validate and annotate SED-ML documents while shielding end-users from the underlying XML representation. SED-ED supports modellers who wish to create, understand and further develop a simulation description provided in SED-ML format. SED-ED is available as a standalone Java application, as an Eclipse plug-in and as an SBSI (www.sbsi.ed.ac.uk) plug-in, all under an MIT open-source license. Source code is at https://sed-ed-sedmleditor.googlecode.com/svn. The application itself is available from https://sourceforge.net/projects/jlibsedml/files/SED-ED/.

  2. AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8

    2015-08-15

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  3. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  4. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses, and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  5. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  6. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  7. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  8. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  9. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  10. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  11. Numerical simulations of the flow with the prescribed displacement of the airfoil and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Řidký, V.; Šidlof, P.; Vlček, V.

    2013-04-01

    The work is devoted to comparing measured data with the results of numerical simulations. A mathematical model without turbulence for incompressible flow was used. In the experiment, the behavior of a designed NACA0015 airfoil in airflow was observed. The OpenFOAM computational package, open-source software based on the finite volume method, was used for the numerical solution. In the numerical solution, the displacement of the airfoil is prescribed so that it corresponds to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedron elements. The time step is limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average values over all periods are then compared with experiment.

  12. Youpi: YOUr processing PIpeline

    NASA Astrophysics Data System (ADS)

    Monnerville, Mathias; Sémah, Gregory

    2012-03-01

    Youpi is a portable, easy-to-use web application providing high-level functionalities to perform data reduction on scientific FITS images. Built on top of various open source reduction tools released to the community by TERAPIX (http://terapix.iap.fr), Youpi can help organize data, manage processing jobs on a computer cluster in real time (using Condor) and facilitate teamwork by allowing fine-grained sharing of results and data. Youpi is modular and comes with plugins which perform, from within a browser, various processing tasks such as evaluating the quality of incoming images (using the QualityFITS software package), computing astrometric and photometric solutions (using SCAMP), resampling and co-adding FITS images (using SWarp) and extracting sources and building source catalogues from astronomical images (using SExtractor). Youpi is useful for small to medium-sized data reduction projects; it is free and is published under the GNU General Public License.

  13. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open-source or industrial products; rather, it comprises a set of capabilities available virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  14. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  15. Open source data logger for low-cost environmental monitoring

    PubMed Central

    2014-01-01

    Abstract The increasing transformation of biodiversity science into a data-intensive discipline has seen numerous independent systems linked and aggregated into the current landscape of biodiversity informatics. This paper outlines how we can move forward with this programme, incorporating real-time environmental monitoring into our methodology using low-power and low-cost computing platforms. PMID:24855446

  16. Using Tracker as a Pedagogical Tool for Understanding Projectile Motion

    ERIC Educational Resources Information Center

    Wee, Loo Kang; Chew, Charles; Goh, Giam Hwee; Tan, Samuel; Lee, Tat Leong

    2012-01-01

    This article reports on the use of Tracker as a pedagogical tool in the effective learning and teaching of projectile motion in physics. When a computer model building learning process is supported and driven by video analysis data, this free Open Source Physics tool can provide opportunities for students to engage in active enquiry-based…

  17. MGtoolkit: A python package for implementing metagraphs

    NASA Astrophysics Data System (ADS)

    Ranathunga, D.; Nguyen, H.; Roughan, M.

    In this paper we present MGtoolkit: an open-source Python package for implementing metagraphs - a first of its kind. Metagraphs are commonly used to specify and analyse business and computer-network policies alike. MGtoolkit can help verify such policies and promotes learning and experimentation with metagraphs. The package currently provides purely textual output for visualising metagraphs and their analysis results.
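
    Since metagraph edges connect sets of elements rather than single nodes, a small generic sketch may help readers unfamiliar with the structure. The code below is a plain-Python illustration of metagraph-style policy reachability, not MGtoolkit's actual API; the policy elements are invented for the example.

    ```python
    # A metagraph edge connects a *set* of elements (the invertex) to another
    # set (the outvertex); here edges encode hypothetical network-policy rules.
    edges = [
        (frozenset({"user", "dept"}), frozenset({"server"})),
        (frozenset({"server"}), frozenset({"database"})),
    ]

    def reachable(source, target, edges):
        """Check whether every element of `target` can be derived from `source`."""
        known = set(source)
        changed = True
        while changed:
            changed = False
            for invertex, outvertex in edges:
                if invertex <= known and not outvertex <= known:
                    known |= outvertex          # the edge fires, adding its outvertex
                    changed = True
        return set(target) <= known

    print(reachable({"user", "dept"}, {"database"}, edges))   # True
    ```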

  18. Heat simulation via Scilab programming

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul

    2014-07-01

    This paper discusses the use of an open-source software package called Scilab to develop a heat simulator. The heat equation was used to simulate heat behavior in an object. The simulator was developed using the finite difference method. Numerical experiment output shows that Scilab can produce a good simulation of heat behavior, with excellent visual output, from only simple computer code.
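
    To illustrate the approach the abstract describes, here is a minimal sketch of an explicit finite-difference solver for the 1-D heat equation. It is written in Python rather than Scilab and is not the authors' code; the grid, diffusivity and boundary values are assumed for the example.

    ```python
    import numpy as np

    # Assumed parameters: a 1-D rod discretized into nx cells.
    nx, nt = 50, 500          # grid points, time steps
    alpha = 1e-4              # thermal diffusivity (m^2/s), assumed value
    dx, dt = 0.01, 0.1        # grid spacing (m), time step (s)
    r = alpha * dt / dx**2    # must stay <= 0.5 for stability of the explicit scheme

    u = np.zeros(nx)
    u[0], u[-1] = 100.0, 0.0  # fixed-temperature boundary conditions

    for _ in range(nt):
        # explicit finite-difference update of the interior points
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

    print(u.round(2))
    ```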

  19. A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium

    ERIC Educational Resources Information Center

    Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.

    2016-01-01

    In this paper, we present a simple spreadsheet based simulation activity that can be performed by students at the undergraduate level. This simulation is implemented in free open source software (FOSS) LibreOffice Calc, which is available for both Windows and Linux platform. This activity aims at building the probability distribution for the…

  20. Leveraging Open Source Software in the Education Management and Leadership Training

    ERIC Educational Resources Information Center

    Nordin, Norazah; Ibrahim, Sham; Mohd. Hamzah, Mohd. Izham; Embi, Mohamed Amin; Din, Rosseni

    2012-01-01

    The development in information technology has now moved from the first wave, which emphasised computer technical skills, to the second wave, which focuses on the application and management aspects. This paper aims to investigate the use of a learning management system among future school heads in education management and leadership. The study was…

  1. Higgs bosons with large transverse momentum at the LHC

    NASA Astrophysics Data System (ADS)

    Kudashkin, Kirill; Lindert, Jonas M.; Melnikov, Kirill; Wever, Christopher

    2018-07-01

    We compute the next-to-leading order QCD corrections to the production of Higgs bosons with large transverse momentum p⊥ ≫ 2m_t at the LHC. To accomplish this, we combine the two-loop amplitudes for the processes gg → Hg, qg → Hq and qq̄ → Hg, recently computed in the approximation of nearly massless top quarks, with the numerical calculation of the squared one-loop amplitudes for the gg → Hgg, qg → Hqg and qq̄ → Hgg processes. The latter computation is performed with OpenLoops. We find that the QCD corrections to the Higgs transverse momentum distribution at very high p⊥ are large but quite similar to the QCD corrections obtained for a point-like Hgg coupling. Our result removes one of the largest sources of theoretical uncertainty in the description of high-p⊥ Higgs boson production and opens a way to use the high-p⊥ region to search for physics beyond the Standard Model.

  2. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
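
    As a rough illustration of the workflow described above, the following minimal ProjectQ program allocates a qubit, applies a gate, and measures it on the default simulator backend. It follows the project's published examples, though details may differ between versions.

    ```python
    from projectq import MainEngine
    from projectq.ops import H, Measure

    eng = MainEngine()              # default backend: a local simulator
    qubit = eng.allocate_qubit()    # allocate one qubit

    H | qubit                       # put the qubit into superposition
    Measure | qubit                 # measure in the computational basis
    eng.flush()                     # push the circuit through the compiler/backend

    print("Measured:", int(qubit))  # 0 or 1 with roughly equal probability
    ```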

  3. ABrox-A user-friendly Python module for approximate Bayesian computation with a focus on model comparison.

    PubMed

    Mertens, Ulf Kai; Voss, Andreas; Radev, Stefan

    2018-01-01

    We give an overview of the basic principles of approximate Bayesian computation (ABC), a class of stochastic methods that enable flexible and likelihood-free model comparison and parameter estimation. Our new open-source software called ABrox is used to illustrate ABC for model comparison on two prominent statistical tests, the two-sample t-test and the Levene test. We further highlight the flexibility of ABC compared to classical Bayesian hypothesis testing by computing an approximate Bayes factor for two multinomial processing tree models. Last but not least, throughout the paper, we introduce ABrox using the accompanying graphical user interface.
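
    To make the ABC idea concrete, here is a generic rejection-sampling sketch for estimating a normal mean. It illustrates the principle only and does not use ABrox's actual API; the data, prior and tolerance are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    observed = rng.normal(loc=1.0, scale=1.0, size=100)     # stand-in data set

    def summary(x):
        return x.mean()                                      # summary statistic

    accepted = []
    for _ in range(100_000):
        mu = rng.uniform(-5, 5)                              # draw from the prior
        simulated = rng.normal(loc=mu, scale=1.0, size=100)  # simulate from the model
        if abs(summary(simulated) - summary(observed)) < 0.05:
            accepted.append(mu)                              # keep parameters whose simulations match

    print("Approximate posterior mean:", np.mean(accepted))
    ```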

  4. Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces

    NASA Technical Reports Server (NTRS)

    Tallman, Aaron; Wang, Yan; Vargas, Mario

    2017-01-01

    To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting on a substrate, in rime ice conditions, was simulated. Open-source computational fluid dynamics (CFD) software was used for the simulation. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.

  5. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
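
    For readers unfamiliar with thin plate spline warping, the small generic sketch below fits a TPS-style transform to landmark pairs using SciPy's radial basis function interpolator. It is unrelated to the LittleQuickWarp implementation, and the landmark coordinates are invented for the example.

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    # Hypothetical 2-D landmark correspondences (source -> target).
    src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], dtype=float)
    dst = src + np.array([[0, 0], [0, 0.1], [0.1, 0], [0.1, 0.1], [0.05, 0.0]])

    # One thin-plate-spline interpolator per output coordinate.
    tps_x = Rbf(src[:, 0], src[:, 1], dst[:, 0], function='thin_plate')
    tps_y = Rbf(src[:, 0], src[:, 1], dst[:, 1], function='thin_plate')

    # Warp arbitrary points with the fitted transform.
    pts = np.array([[0.25, 0.75], [0.9, 0.1]])
    warped = np.column_stack([tps_x(pts[:, 0], pts[:, 1]),
                              tps_y(pts[:, 0], pts[:, 1])])
    print(warped)
    ```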

  6. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

    Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing a unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology– representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). Availability The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl, the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/ PMID:22541598

  7. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    PubMed

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing a unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology- representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology.OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources.The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl, the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/

  8. Aeolian Sediment Trapping Efficiencies of Sparse Vegetation and its Ecohydrological Consequences in Drylands

    NASA Astrophysics Data System (ADS)

    Gonzales, H. B.; Ravi, S.; Li, J. J.; Sankey, J. B.

    2016-12-01

    Hydrological and aeolian processes control the redistribution of soil and nutrients in arid and semi-arid environments, thereby contributing to the formation of heterogeneous patchy landscapes with nutrient-rich resource islands surrounded by nutrient-depleted bare soil patches. The differential trapping of soil particles by vegetation canopies may result in textural changes beneath the vegetation, which, in turn, can alter hydrological processes such as infiltration and runoff. We conducted infiltration experiments and soil grain size analysis of several shrub (Larrea tridentata) and grass (Bouteloua eriopoda) microsites in a heterogeneous landscape in the Chihuahuan desert (New Mexico, USA). Our results indicate heterogeneity in soil texture and infiltration patterns under grass and shrub microsites. We assessed the trapping effectiveness of vegetation canopies using a novel computational fluid dynamics (CFD) approach. An open-source software (OpenFOAM) was used to validate the data gathered from particle size distribution (PSD) analysis of soil within the shrub and grass microsites and their porosities (91% for shrub and 68% for grass) determined using terrestrial LiDAR surveys. Three-dimensional architectures of the shrub and grass were created using an open-source computer-aided design (CAD) software (Blender). The readily available solvers within the OpenFOAM architecture were modified to test the validity and optimize input parameters in assessing trapping efficiencies of sparse vegetation against aeolian sediment flux. The results from the numerical simulations explained the observed textural changes under grass and shrub canopies and highlighted the role of sediment trapping by canopies in structuring patch-scale hydrological processes.

  9. Apache Open Climate Workbench: Building Open Source Climate Science Tools and Community at the Apache Software Foundation

    NASA Astrophysics Data System (ADS)

    Joyce, M.; Ramirez, P.; Boustani, M.; Mattmann, C. A.; Khudikyan, S.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Apache Open Climate Workbench (OCW; https://climate.apache.org/) is a Top-Level Project at the Apache Software Foundation that aims to provide a suite of tools for performing climate science evaluations using model outputs from a multitude of different sources (ESGF, CORDEX, U.S. NCA, NARCCAP) with remote sensing data from NASA, NOAA, and other agencies. Apache OCW is the second NASA project to become a Top-Level Project at the Apache Software Foundation. It grew out of the Jet Propulsion Laboratory's (JPL) Regional Climate Model Evaluation System (RCMES) project, a collaboration between JPL and the University of California, Los Angeles' Joint Institute for Regional Earth System Science and Engineering (JIFRESSE). Apache OCW provides scientists and developers with tools for data manipulation, metrics for dataset comparisons, and a visualization suite. In addition to a powerful low-level API, Apache OCW also supports a web application for quick, browser-controlled evaluations, a command line application for local evaluations, and a virtual machine for isolated experimentation with minimal setup. This talk will look at the difficulties and successes of moving a closed community research project out into the wild world of open source. We'll explore the growing pains Apache OCW went through to become a Top-Level Project at the Apache Software Foundation as well as the benefits gained by opening up development to the broader climate and computer science communities.

  10. Generation of Single Photons and Entangled Photon Pairs from a Quantum Dot

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Pelton, M.; Santori, C.; Solomon, G. S.

    2002-10-01

    Current quantum cryptography systems are limited by the Poissonian photon statistics of a standard light source: a security loophole is opened up by the possibility of multiple-photon pulses. By replacing the source with a single-photon emitter, transmission rates of secure information can be improved. A single-photon source is also essential to implement a linear optics quantum computer. We have investigated the use of single self-assembled InAs/GaAs quantum dots as such single-photon sources, and have seen a hundred-fold reduction in the multi-photon probability as compared to Poissonian pulses. An extension of our experiment should also allow for the generation of triggered, polarization-entangled photon pairs.

  11. OpenSeesPy: Python library for the OpenSees finite element framework

    NASA Astrophysics Data System (ADS)

    Zhu, Minjie; McKenna, Frank; Scott, Michael H.

    2018-01-01

    OpenSees, an open source finite element software framework, has been used broadly in the earthquake engineering community for simulating the seismic response of structural and geotechnical systems. The framework allows users to perform finite element analysis with a scripting language and for developers to create both serial and parallel finite element computer applications as interpreters. For the last 15 years, Tcl has been the primary scripting language to which the model building and analysis modules of OpenSees are linked. To provide users with different scripting language options, particularly Python, the OpenSees interpreter interface was refactored to provide multi-interpreter capabilities. This refactoring, resulting in the creation of OpenSeesPy as a Python module, is accomplished through an abstract interface for interpreter calls with concrete implementations for different scripting languages. Through this approach, users are able to develop applications that utilize the unique features of several scripting languages while taking advantage of advanced finite element analysis models and algorithms.
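
    A minimal sketch of the kind of script OpenSeesPy enables is shown below: a two-element linear truss built and analyzed entirely from Python. It is adapted from the style of the project's documented examples, so argument lists may vary slightly between releases, and the geometry, areas and load values are assumed for the example.

    ```python
    import openseespy.opensees as ops

    ops.wipe()
    ops.model('basic', '-ndm', 2, '-ndf', 2)      # 2-D model, 2 DOF per node

    # Nodes (assumed coordinates) and supports.
    ops.node(1, 0.0, 0.0)
    ops.node(2, 144.0, 0.0)
    ops.node(3, 72.0, 96.0)
    ops.fix(1, 1, 1)
    ops.fix(2, 1, 1)

    # Elastic material and two truss elements with assumed areas.
    ops.uniaxialMaterial('Elastic', 1, 3000.0)
    ops.element('Truss', 1, 1, 3, 10.0, 1)
    ops.element('Truss', 2, 2, 3, 5.0, 1)

    # Static point load at the free node, then a linear static analysis.
    ops.timeSeries('Linear', 1)
    ops.pattern('Plain', 1, 1)
    ops.load(3, 100.0, -50.0)
    ops.system('BandSPD')
    ops.numberer('RCM')
    ops.constraints('Plain')
    ops.integrator('LoadControl', 1.0)
    ops.algorithm('Linear')
    ops.analysis('Static')
    ops.analyze(1)

    print('Displacement of node 3:', ops.nodeDisp(3))
    ```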

  12. Open source libraries and frameworks for mass spectrometry based proteomics: A developer's perspective☆

    PubMed Central

    Perez-Riverol, Yasset; Wang, Rui; Hermjakob, Henning; Müller, Markus; Vesada, Vladimir; Vizcaíno, Juan Antonio

    2014-01-01

    Data processing, management and visualization are central and critical components of a state of the art high-throughput mass spectrometry (MS)-based proteomics experiment, and are often some of the most time-consuming steps, especially for labs without much bioinformatics support. The growing interest in the field of proteomics has triggered an increase in the development of new software libraries, including freely available and open-source software. From database search analysis to post-processing of the identification results, even though the objectives of these libraries and packages can vary significantly, they usually share a number of features. Common use cases include the handling of protein and peptide sequences, the parsing of results from various proteomics search engines output files, and the visualization of MS-related information (including mass spectra and chromatograms). In this review, we provide an overview of the existing software libraries, open-source frameworks and also, we give information on some of the freely available applications which make use of them. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467006

  13. PyFLOWGO: An open-source platform for simulation of channelized lava thermo-rheological properties

    NASA Astrophysics Data System (ADS)

    Chevrel, Magdalena Oryaëlle; Labroquère, Jérémie; Harris, Andrew J. L.; Rowland, Scott K.

    2018-02-01

    Lava flow advance can be modeled through tracking the evolution of the thermo-rheological properties of a control volume of lava as it cools and crystallizes. An example of such a model was conceived by Harris and Rowland (2001), who developed a 1-D model, FLOWGO, in which the velocity of a control volume flowing down a channel depends on rheological properties computed following the thermal path estimated via a heat balance box model. We provide here an updated version of FLOWGO written in Python, an open-source, modern and flexible language. Our software, named PyFLOWGO, allows selection of heat fluxes and rheological models of the user's choice to simulate the thermo-rheological evolution of the lava control volume. We describe its architecture, which offers more flexibility while reducing the risk of making errors when changing models in comparison to the previous FLOWGO version. Three cases are tested using actual data from channel-fed lava flow systems and results are discussed in terms of model validation and convergence. PyFLOWGO is open-source and packaged in a Python library to be imported and reused in any Python program (https://github.com/pyflowgo/pyflowgo).
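
    To convey the flavor of a FLOWGO-type calculation, the schematic sketch below marches a control volume down a channel, cooling it radiatively while viscosity (and hence velocity) responds. All relations and constants are simplified placeholders chosen for the illustration; they are not PyFLOWGO's actual models or API.

    ```python
    import numpy as np

    # Schematic march of a lava control volume down a channel (placeholder physics).
    g, rho, depth, slope = 9.81, 2600.0, 1.0, np.radians(5)   # assumed channel/lava values
    sigma, eps, cp = 5.67e-8, 0.95, 1150.0                    # Stefan-Boltzmann, emissivity, heat capacity
    width = 5.0                                               # channel width (m)
    T = 1400.0                                                # core temperature (K)
    x, dx = 0.0, 10.0                                         # distance along channel, step (m)

    while T > 1300.0:                                         # stop at an assumed cutoff temperature
        eta = 1e2 * np.exp(0.04 * (1400.0 - T))               # placeholder viscosity law (Pa s)
        v = rho * g * depth**2 * np.sin(slope) / (3 * eta)    # Jeffreys-type mean velocity
        q_seg = eps * sigma * T**4 * width * dx               # radiative loss over this segment (W)
        m_dot = rho * v * depth * width                       # mass flux through the channel (kg/s)
        T -= q_seg / (m_dot * cp)                             # steady-state cooling over the step
        x += dx

    print(f"Schematic run-out distance: {x:.0f} m")
    ```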

  14. Open source libraries and frameworks for mass spectrometry based proteomics: a developer's perspective.

    PubMed

    Perez-Riverol, Yasset; Wang, Rui; Hermjakob, Henning; Müller, Markus; Vesada, Vladimir; Vizcaíno, Juan Antonio

    2014-01-01

    Data processing, management and visualization are central and critical components of a state of the art high-throughput mass spectrometry (MS)-based proteomics experiment, and are often some of the most time-consuming steps, especially for labs without much bioinformatics support. The growing interest in the field of proteomics has triggered an increase in the development of new software libraries, including freely available and open-source software. From database search analysis to post-processing of the identification results, even though the objectives of these libraries and packages can vary significantly, they usually share a number of features. Common use cases include the handling of protein and peptide sequences, the parsing of results from various proteomics search engines output files, and the visualization of MS-related information (including mass spectra and chromatograms). In this review, we provide an overview of the existing software libraries, open-source frameworks and also, we give information on some of the freely available applications which make use of them. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Is Multitask Deep Learning Practical for Pharma?

    PubMed

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
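
    As a hedged illustration of the workflow the abstract describes, the sketch below trains a multitask network on the public Tox21 benchmark with DeepChem. The call names follow DeepChem's published examples, but exact signatures and defaults may differ across releases.

    ```python
    import numpy as np
    import deepchem as dc

    # Load a public multitask benchmark; DeepChem handles featurization (ECFP fingerprints).
    tasks, (train, valid, test), transformers = dc.molnet.load_tox21(featurizer='ECFP')

    # A multitask fully connected network: one shared trunk, one output head per task.
    model = dc.models.MultitaskClassifier(n_tasks=len(tasks),
                                          n_features=1024,
                                          layer_sizes=[1000])
    model.fit(train, nb_epoch=10)

    # Evaluate with mean ROC-AUC across tasks.
    metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
    print(model.evaluate(test, [metric], transformers))
    ```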

  16. Low-cost embedded systems for democratizing ocean sensor technology in the coastal zone

    NASA Astrophysics Data System (ADS)

    Glazer, B. T.; Lio, H. I.

    2017-12-01

    Environmental sciences suffer from undersampling. Enabling sustained and unattended data collection in the coastal zone typically involves expensive instrumentation and infrastructure deployed as cabled observatories or moorings with little flexibility in deployment location following initial installation. High costs of commercially-available or custom instruments have limited the number of sensor sites that can be targeted by academic researchers, and have also limited engagement with the public. We have developed a novel, low-cost, open-source sensor and software platform to enable wireless data transfer of biogeochemical sensors in the coastal zone. The platform is centered upon widely available, low-cost, single board computers and microcontrollers. We have used a blend of on-hand research-grade sensors and low-cost open-source electronics that can be assembled by tech-savvy non-engineers. Robust, open-source code that remains customizable for specific miniNode configurations can match a specific site's measurement needs, depending on the scientific research priorities. We have demonstrated prototype capabilities and versatility through lab testing and field deployments of multiple sensor nodes with multiple sensor inputs, all of which are streaming near-real-time data from Kaneohe Bay over wireless RF links to a shore-based base station.

  17. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  18. CAMPAIGN: an open-source library of GPU-accelerated data clustering algorithms.

    PubMed

    Kohlhoff, Kai J; Sosnick, Marc H; Hsu, William T; Pande, Vijay S; Altman, Russ B

    2011-08-15

    Data clustering techniques are an essential component of a good data analysis toolbox. Many current bioinformatics applications are inherently compute-intense and work with very large datasets. Sequential algorithms are inadequate for providing the necessary performance. For this reason, we have created Clustering Algorithms for Massively Parallel Architectures, Including GPU Nodes (CAMPAIGN), a central resource for data clustering algorithms and tools that are implemented specifically for execution on massively parallel processing architectures. CAMPAIGN is a library of data clustering algorithms and tools, written in 'C for CUDA' for Nvidia GPUs. The library provides up to two orders of magnitude speed-up over respective CPU-based clustering algorithms and is intended as an open-source resource. New modules from the community will be accepted into the library and the layout of it is such that it can easily be extended to promising future platforms such as OpenCL. Releases of the CAMPAIGN library are freely available for download under the LGPL from https://simtk.org/home/campaign. Source code can also be obtained through anonymous subversion access as described on https://simtk.org/scm/?group_id=453. kjk33@cantab.net.

  19. Global Dynamic Exposure and the OpenBuildingMap

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.

    2015-12-01

    Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find the balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100'000 per day), and a plethora of information about school, hospital, and other critical facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility classes distribution via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building capture tools for mobile devices for simple and fast building property capturing. The second crowd-sourced approach involves local experts in estimating building vulnerability that will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.

  20. pyJac: Analytical Jacobian generator for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen

    2017-06-01

    Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, generally reactive-flow simulations rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (number of species ranging 13-360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and Python-based implementation will enable wide adoption.
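
    To see why finite-difference Jacobians become the bottleneck the abstract mentions, the generic sketch below builds one column at a time: each column costs an extra source-term evaluation, so for a mechanism whose source term already scales with the number of species the total work grows roughly quadratically. This illustrates the baseline approach, not pyJac's analytical code; the toy two-species mechanism is invented.

    ```python
    import numpy as np

    def fd_jacobian(f, y, eps=1e-8):
        """First-order finite-difference Jacobian of dy/dt = f(y).

        Needs len(y) + 1 evaluations of f; with an O(N) source term this
        makes the Jacobian an O(N^2) operation for N species.
        """
        n = len(y)
        f0 = f(y)
        jac = np.empty((n, n))
        for j in range(n):
            y_pert = y.copy()
            y_pert[j] += eps * max(abs(y[j]), 1.0)
            jac[:, j] = (f(y_pert) - f0) / (y_pert[j] - y[j])
        return jac

    # Toy two-species source term standing in for a real chemical mechanism.
    def source(y):
        k1, k2 = 1.0, 0.5
        return np.array([-k1 * y[0] + k2 * y[1],
                          k1 * y[0] - k2 * y[1]])

    print(fd_jacobian(source, np.array([1.0, 0.0])))
    ```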

  1. XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.

    PubMed

    Ching, Daniel J; Gürsoy, Doğa

    2017-03-01

    The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  2. Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  3. XDesign: An open-source software package for designing X-ray imaging phantoms and experiments

    DOE PAGES

    Ching, Daniel J.; Gürsoy, Doğa

    2017-02-21

    Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  4. Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspesi, G; Bai, J; Deese, R

    2015-05-12

    Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit’s primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit’s capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.

  5. Next Generation Transport Phenomenology Model

    NASA Technical Reports Server (NTRS)

    Strickland, Douglas J.; Knight, Harold; Evans, J. Scott

    2004-01-01

    This report describes the progress made in Quarter 3 of Contract Year 3 on the development of the Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.

  6. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute of Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Visualization and processing of computed solid-state NMR parameters: MagresView and MagresPython.

    PubMed

    Sturniolo, Simone; Green, Timothy F G; Hanson, Robert M; Zilka, Miri; Refson, Keith; Hodgkinson, Paul; Brown, Steven P; Yates, Jonathan R

    2016-09-01

    We introduce two open source tools to aid the processing and visualisation of ab initio computed solid-state NMR parameters. The Magres file format for computed NMR parameters (as implemented in CASTEP v8.0 and QuantumEspresso v5.0.0) is supported. MagresView is built upon the widely used Jmol crystal viewer and provides an intuitive environment to display computed NMR parameters. It can provide simple pictorial representations of one- and two-dimensional NMR spectra as well as output a selected spin system for exact simulations with dedicated spin-dynamics software. MagresPython provides a simple scripting environment to manipulate large numbers of computed NMR parameters to search for structural correlations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Open source tracking and analysis of adult Drosophila locomotion in Buridan's paradigm with and without visual targets.

    PubMed

    Colomb, Julien; Reiter, Lutz; Blaszkiewicz, Jedrzej; Wessnitzer, Jan; Brembs, Bjoern

    2012-01-01

    Insects have been among the most widely used model systems for studying the control of locomotion by nervous systems. In Drosophila, we implemented a simple test for locomotion: in Buridan's paradigm, flies walk back and forth between two inaccessible visual targets [1]. Until today, the lack of easily accessible tools for tracking the fly position and analyzing its trajectory has probably contributed to the slow acceptance of Buridan's paradigm. We present here a package of open source software designed to track a single animal walking in a homogenous environment (Buritrack) and to analyze its trajectory. The Centroid Trajectory Analysis (CeTrAn) software is coded in the open source statistics project R. It extracts eleven metrics and includes correlation analyses and a Principal Components Analysis (PCA). It was designed to be easily customized to personal requirements. In combination with inexpensive hardware, these tools can readily be used for teaching and research purposes. We demonstrate the capabilities of our package by measuring the locomotor behavior of adult Drosophila melanogaster (whose wings were clipped), either in the presence or in the absence of visual targets, and comparing the latter to different computer-generated data. The analysis of the trajectories confirms that flies are centrophobic and shows that inaccessible visual targets can alter the orientation of the flies without changing their overall patterns of activity. Using computer generated data, the analysis software was tested, and chance values for some metrics (as well as chance value for their correlation) were set. Our results prompt the hypothesis that fixation behavior is observed only if negative phototaxis can overcome the propensity of the flies to avoid the center of the platform. Together with our companion paper, we provide new tools to promote Open Science as well as the collection and analysis of digital behavioral data.

  9. Leveraging OpenStudio's Application Programming Interfaces: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, N.; Ball, B.; Goldwasser, D.

    2013-11-01

    OpenStudio development efforts have been focused on providing Application Programming Interfaces (APIs) where users are able to extend OpenStudio without the need to compile the open source libraries. This paper will discuss the basic purposes and functionalities of the core libraries that have been wrapped with APIs, including the Building Model, Results Processing, Advanced Analysis, Uncertainty Quantification, and Data Interoperability through Translators. Several building energy modeling applications have been produced using OpenStudio's API and Software Development Kits (SDK), including the United States Department of Energy's Asset Score Calculator, a mobile-based audit tool, an energy design assistance reporting protocol, and a portfolio-scale incentive optimization analysis methodology. Each of these software applications will be discussed briefly, along with how the APIs were leveraged for various uses including high-level modeling, data transformations from detailed building audits, error checking/quality assurance of models, and use of high-performance computing for mass simulations.

  10. Virtual Labs (Science Gateways) as platforms for Free and Open Source Science

    NASA Astrophysics Data System (ADS)

    Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey

    2016-04-01

    The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.

  11. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies developed for communication and storage of data can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing teleradiology. (3) Finally, we developed a direct secure interface to use the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software called OsiriX allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features such as teleradiology, secure and convenient image and data communication, and the use of external data storage services open the gate to a much broader extension of our imaging infrastructure to the outside world.

  12. Cloud-based Jupyter Notebooks for Water Data Analysis

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Brazil, L.; Seul, M.

    2017-12-01

    The development and adoption of technologies by the water science community to improve our ability to openly collaborate and share workflows will have a transformative impact on how we address the challenges associated with collaborative and reproducible scientific research. Jupyter notebooks offer one solution by providing an open-source platform for creating metadata-rich toolchains for modeling and data analysis applications. Adoption of this technology within the water sciences, coupled with publicly available datasets from agencies such as USGS, NASA, and EPA enables researchers to easily prototype and execute data intensive toolchains. Moreover, implementing this software stack in a cloud-based environment extends its native functionality to provide researchers a mechanism to build and execute toolchains that are too large or computationally demanding for typical desktop computers. Additionally, this cloud-based solution enables scientists to disseminate data processing routines alongside journal publications in an effort to support reproducibility. For example, these data collection and analysis toolchains can be shared, archived, and published using the HydroShare platform or downloaded and executed locally to reproduce scientific analysis. This work presents the design and implementation of a cloud-based Jupyter environment and its application for collecting, aggregating, and munging various datasets in a transparent, sharable, and self-documented manner. The goals of this work are to establish a free and open source platform for domain scientists to (1) conduct data intensive and computationally intensive collaborative research, (2) utilize high performance libraries, models, and routines within a pre-configured cloud environment, and (3) enable dissemination of research products. This presentation will discuss recent efforts towards achieving these goals, and describe the architectural design of the notebook server in an effort to support collaborative and reproducible science.

  13. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111327. No. of bytes in distributed program, including test data, etc.: 608411. Distribution format: tar.gz. Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: source code 4.5 MB, complete package 242 MB. Classification: 14, 16.9. External routines: OpenGL, OpenCL. Nature of problem: integrate N-body simulations and mass-spring models. Solution method: numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: problem dependent.
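
    As a rough illustration of the numerical step the abstract refers to (and not code from MPPhys itself, which is written in C++/OpenCL), the following sketch integrates the gravitational two-body problem with a simple Euler-type scheme in Python.

      # Minimal sketch (not part of MPPhys): Euler-type integration of the
      # gravitational two-body problem, the step that bridges single-particle
      # examples and many-particle simulations.
      import numpy as np

      G = 1.0                                # gravitational constant, scaled units
      m = np.array([1.0, 0.001])             # two masses
      pos = np.array([[0.0, 0.0], [1.0, 0.0]])
      vel = np.array([[0.0, 0.0], [0.0, 1.0]])
      dt, steps = 1e-3, 10000

      for _ in range(steps):
          r = pos[1] - pos[0]
          dist = np.linalg.norm(r)
          force = G * m[0] * m[1] * r / dist**3    # force on body 0 from body 1
          acc = np.array([force / m[0], -force / m[1]])
          vel += acc * dt                    # v_{n+1} = v_n + a_n * dt
          pos += vel * dt                    # x_{n+1} = x_n + v_{n+1} * dt
          # (updating positions with the new velocity is the semi-implicit
          #  variant; a pure forward-Euler step would use the old velocity)

      print(pos)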

  14. ISAMBARD: an open-source computational environment for biomolecular analysis, modelling and design.

    PubMed

    Wood, Christopher W; Heal, Jack W; Thomson, Andrew R; Bartlett, Gail J; Ibarra, Amaurys Á; Brady, R Leo; Sessions, Richard B; Woolfson, Derek N

    2017-10-01

    The rational design of biomolecules is becoming a reality. However, further computational tools are needed to facilitate and accelerate this, and to make it accessible to more users. Here we introduce ISAMBARD, a tool for structural analysis, model building and rational design of biomolecules. ISAMBARD is open-source, modular, computationally scalable and intuitive to use. These features allow non-experts to explore biomolecular design in silico. ISAMBARD addresses a long-standing issue in protein design, namely, how to introduce backbone variability in a controlled manner. This is achieved through the generalization of tools for parametric modelling, describing the overall shape of proteins geometrically and without input from experimentally determined structures. This will allow backbone conformations for entire folds and assemblies not observed in nature to be generated de novo, that is, to access the 'dark matter of protein-fold space'. We anticipate that ISAMBARD will find broad applications in biomolecular design, biotechnology and synthetic biology. A current stable build can be downloaded from the Python Package Index (https://pypi.python.org/pypi/isambard/), with development builds available on GitHub (https://github.com/woolfson-group/) along with documentation, tutorial material and all the scripts used to generate the data described in this paper. Contact: d.n.woolfson@bristol.ac.uk or chris.wood@bristol.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
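
    As a rough illustration of what parametric backbone modelling means (independent of ISAMBARD's own API), the sketch below generates idealized C-alpha coordinates for an alpha-helix purely from geometric parameters; the default radius, rise and twist are the textbook values for an ideal alpha-helix.

      # Minimal sketch (not the ISAMBARD API): idealized C-alpha coordinates of
      # an alpha-helix generated directly from geometric parameters, to show
      # what describing a backbone "parametrically" means.
      import numpy as np

      def helix_ca_coords(n_residues, radius=2.3, rise=1.5, twist_deg=100.0):
          """Return an (n, 3) array of C-alpha positions for an ideal helix.

          radius    -- helix radius in angstroms (about 2.3 for an alpha-helix)
          rise      -- axial rise per residue in angstroms (about 1.5)
          twist_deg -- rotation per residue in degrees (about 100, 3.6 res/turn)
          """
          t = np.radians(twist_deg) * np.arange(n_residues)
          return np.column_stack((radius * np.cos(t),
                                  radius * np.sin(t),
                                  rise * np.arange(n_residues)))

      print(helix_ca_coords(10))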

  15. Hybrid cloud and cluster computing paradigms for life science applications

    PubMed Central

    2010-01-01

    Background: Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Results: Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions: The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods: We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments. PMID:21210982

  16. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
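
    To make the notion of an iterative MapReduce computation concrete (independent of the Twister API), the sketch below runs k-means clustering as a loop of map and reduce phases in plain Python. Each iteration re-executes both phases, which is exactly the pattern that plain Hadoop-style MapReduce handles poorly and that Twister targets.

      # Minimal sketch (not the Twister API): k-means expressed as an iterated
      # map ("assign each point to its nearest centroid") and reduce
      # ("average the points per centroid") loop.
      import numpy as np

      def kmeans_mapreduce(points, centroids, iterations=20):
          for _ in range(iterations):
              # map phase: emit (nearest-centroid-index, point) pairs
              keyed = [(int(np.argmin(np.linalg.norm(centroids - p, axis=1))), p)
                       for p in points]
              # reduce phase: average the points assigned to each centroid
              centroids = np.array([
                  np.mean([p for k, p in keyed if k == i], axis=0)
                  if any(k == i for k, _ in keyed) else centroids[i]
                  for i in range(len(centroids))
              ])
          return centroids

      rng = np.random.default_rng(0)
      data = rng.normal(size=(200, 2))
      print(kmeans_mapreduce(data, data[:3].copy()))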

  17. Study of recreational land and open space using Skylab imagery

    NASA Technical Reports Server (NTRS)

    Sattinger, I. J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. An analysis of the statistical uniqueness of each of the signatures of the Gratiot-Saginaw State Game Area was made by computing a matrix of probabilities of misclassification for all possible signature pairs. Within each data set, the 35 signatures were then aggregated into a smaller set of composite signatures by combining groups of signatures having high probabilities of misclassification. Computer separation of forest density classes was poor with multispectral scanner data collected on 5 August 1973. Signatures from the scanner data were further analyzed to determine the ranking of spectral channels for computer separation of the scene classes. Probabilities of misclassification were computed for composite signatures using four separate combinations of data source and channel selection.

  18. Using CamiTK for rapid prototyping of interactive computer assisted medical intervention applications.

    PubMed

    Promayon, Emmanuel; Fouard, Céline; Bailet, Mathieu; Deram, Aurélien; Fiard, Gaëlle; Hungr, Nikolai; Luboz, Vincent; Payan, Yohan; Sarrazin, Johan; Saubat, Nicolas; Selmi, Sonia Yuki; Voros, Sandrine; Cinquin, Philippe; Troccaz, Jocelyne

    2013-01-01

    Computer Assisted Medical Intervention (CAMI hereafter) is a complex multi-disciplinary field. CAMI research requires the collaboration of experts in several fields as diverse as medicine, computer science, mathematics, instrumentation, signal processing, mechanics, modeling, automatics, optics, etc. CamiTK is a modular framework that helps researchers and clinicians collaborate in order to prototype CAMI applications by regrouping the knowledge and expertise from each discipline. It is an open-source, cross-platform, generic and modular tool written in C++ which can handle medical images, surgical navigation, biomedical simulations and robot control. This paper presents the Computer Assisted Medical Intervention ToolKit (CamiTK) and how it is used in various applications in our research team.

  19. A Simple Technique for Securing Data at Rest Stored in a Computing Cloud

    NASA Astrophysics Data System (ADS)

    Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai

    "Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.

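    The pattern described above is usually realized as hybrid encryption: bulk data are encrypted with a random symmetric key, and only that key is wrapped with the public key, so the private key can stay offline. The sketch below illustrates the general technique with the open-source Python cryptography package; it is not the authors' exact tooling.

      # Minimal sketch of hybrid encryption for data at rest (general technique,
      # not the paper's specific implementation).
      from cryptography.hazmat.primitives.asymmetric import rsa, padding
      from cryptography.hazmat.primitives import hashes
      from cryptography.fernet import Fernet

      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      data = b"measurement results ..."
      data_key = Fernet.generate_key()              # random symmetric key
      ciphertext = Fernet(data_key).encrypt(data)   # bulk data encrypted symmetrically
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)
      wrapped_key = public_key.encrypt(data_key, oaep)   # key wrapped with RSA-OAEP

      # only ciphertext and wrapped_key need to be stored in the cloud;
      # the private-key holder can later recover the data offline
      recovered_key = private_key.decrypt(wrapped_key, oaep)
      assert Fernet(recovered_key).decrypt(ciphertext) == data
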
  20. OpenDA Open Source Generic Data Assimilation Environment and its Application in Process Models

    NASA Astrophysics Data System (ADS)

    El Serafy, Ghada; Verlaan, Martin; Hummel, Stef; Weerts, Albrecht; Dhondia, Juzer

    2010-05-01

    Data assimilation techniques are essential elements in state-of-the-art development of models and their optimization with data in the fields of groundwater, surface water and soil systems. They are essential tools in the calibration of complex modelling systems and the improvement of model forecasts. OpenDA is a new, generic open source data assimilation environment for application to a choice of physical process models, applied to case-dependent domains. OpenDA was introduced recently when the developers of COSTA, an open-source TU Delft project [http://www.costapse.org; Van Velzen and Verlaan, 2007], and those of the DATools from the former WL|Delft Hydraulics [El Serafy et al. 2007; Weerts et al. 2009] decided to join forces. OpenDA makes use of a set of interfaces that describe the interaction between models, observations and data assimilation algorithms. It focuses on flexible applications in portable systems for modelling geophysical processes. It provides a generic interfacing protocol that allows combination of the implemented data assimilation techniques with, in principle, any time-stepping model describing a process (atmospheric processes, 3D circulation, 2D water level, sea surface temperature, soil systems, groundwater, etc.). Presently, OpenDA features filtering techniques and calibration techniques. The presentation will give an overview of OpenDA and the results of some of its practical applications. References: Application of data assimilation in portable operational forecasting systems—the DATools assimilation environment, El Serafy G.Y., H. Gerritsen, S. Hummel, A. H. Weerts, A. E. Mynett and M. Tanaka (2007), Journal of Ocean Dynamics, DOI 10.1007/s10236-007-0124-3, pp. 485-499. COSTA: a problem solving environment for data assimilation applied for hydrodynamical modelling, Van Velzen and Verlaan (2007), Meteorologische Zeitschrift, Volume 16, Number 6, December 2007, pp. 777-793(17). Application of generic data assimilation tools (DATools) for flood forecasting purposes, A.H. Weerts, G.Y.H. El Serafy, S. Hummel, J. Dhondia, and H. Gerritsen (2009), accepted by Computers & Geosciences.
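
    As a minimal illustration of what the filtering techniques in such an environment do (and not the OpenDA interface itself), the sketch below implements a scalar Kalman filter analysis step that blends a model forecast with an observation according to their error variances.

      # Minimal sketch (not the OpenDA interface): a scalar Kalman filter
      # analysis step, the core operation behind filter-based data assimilation.
      def kalman_update(x_forecast, p_forecast, y_obs, r_obs):
          """Blend a model forecast with an observation.

          x_forecast -- forecast state, p_forecast -- its error variance
          y_obs      -- observation,    r_obs      -- observation error variance
          """
          gain = p_forecast / (p_forecast + r_obs)
          x_analysis = x_forecast + gain * (y_obs - x_forecast)
          p_analysis = (1.0 - gain) * p_forecast
          return x_analysis, p_analysis

      # e.g. forecast water level 2.0 m (var 0.25) and gauge reading 2.3 m (var 0.04)
      print(kalman_update(2.0, 0.25, 2.3, 0.04))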

  1. A new chapter in environmental sensing: The Open-Source Published Environmental Sensing (OPENS) laboratory

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.

    2015-12-01

    The confluence of 3-dimensional printing; low-cost solid-state sensors; low-cost, low-power digital controllers (e.g., Arduinos); and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an online forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein students and scientists worldwide can publish, with peer review and a DOI, novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be accessed in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we will present an example of the design and publication process using the design and data from the OPENS-Permeameter. The publication includes 3-D printing code; Arduino (or other control/logging platform) operational code; sample data sets; and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.

  2. Point source moving above a finite impedance reflecting plane - Experiment and theory

    NASA Technical Reports Server (NTRS)

    Norum, T. D.; Liu, C. H.

    1978-01-01

    A widely used experimental version of the acoustic monopole consists of an acoustic driver of restricted opening forced by a discrete frequency oscillator. To investigate the effects of forward motion on this source, it was mounted above an automobile and driven over an asphalt surface at constant speed past a microphone array. The shapes of the received signal were compared to results computed from an analysis of a fluctuating-mass-type point source moving above a finite impedance reflecting plane. Good agreement was found between experiment and theory when a complex normal impedance representative of a fairly hard acoustic surface was used in the analysis.

  3. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, the authors present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
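
    (From the figures quoted above, that corresponds to roughly 14 Gflops / 185 W ≈ 76 Mflops per watt at load.)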

  4. BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis.

    PubMed

    Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren

    2017-10-01

    BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction, analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion-chromatography-coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images, it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
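
    One of the basic analyses such software automates is the Guinier fit: at low q, ln I(q) is linear in q² with slope -Rg²/3. The sketch below (independent of the BioXTAS RAW code base) recovers the radius of gyration from a synthetic profile; the caller is assumed to have restricted q to the Guinier regime (roughly q·Rg below about 1.3).

      # Minimal sketch (not the BioXTAS RAW API): a Guinier fit.
      # ln I(q) = ln I(0) - (Rg**2 / 3) * q**2, so a straight-line fit of
      # ln I against q**2 yields the radius of gyration Rg.
      import numpy as np

      def guinier_fit(q, intensity):
          """Return (Rg, I0) from low-q data already trimmed to the Guinier regime."""
          slope, intercept = np.polyfit(q**2, np.log(intensity), 1)
          return np.sqrt(-3.0 * slope), np.exp(intercept)

      # synthetic Guinier-like data with Rg = 20 angstroms and I(0) = 100
      q = np.linspace(0.005, 0.06, 50)
      intensity = 100.0 * np.exp(-(q * 20.0)**2 / 3.0)
      print(guinier_fit(q, intensity))   # should recover roughly (20.0, 100.0)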

  5. elevatr: Access Elevation Data from Various APIs

    EPA Pesticide Factsheets

    Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. Currently, the package supports access to the Mapzen Elevation Service, Mapzen Terrain Service, and the USGS Elevation Point Query Service. The R language for statistical computing is increasingly used for spatial data analysis. This R package, elevatr, is a response to this and provides access to elevation data from various sources directly in R. The impact of elevatr is that it will 1) facilitate spatial analysis in R by providing access to a foundational dataset for many types of analyses (e.g. hydrology, limnology), 2) open up a new set of users and uses for APIs widely used outside of R, and 3) provide an excellent example of federal open source development as promoted by the Federal Source Code Policy (https://sourcecode.cio.gov/).

  6. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work within established and new collaborations. Especially in domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons over the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community: over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository, and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate that each will bring its own unique challenges.

  7. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    PubMed

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of a command line interface, massive computational resources and expertise, which is a daunting prospect for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline has been developed by integrating several open source next generation sequencing (NGS) tools along with a graphical user interface, called Integrated SNP Mining and Utilization (ISMU), for SNP discovery and utilization in developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction methods (SAMtools/SOAPsnp/CNS2snp and CbCC) and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at high speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise, helping them discover SNPs and utilize them in genomics, genetics and breeding studies. It has been parallelized to process huge next generation sequencing datasets. It has been developed in Java and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software.

  8. A Kernel for Open Source Drug Discovery in Tropical Diseases

    PubMed Central

    Ortí, Leticia; Carbajo, Rodrigo J.; Pieper, Ursula; Eswar, Narayanan; Maurer, Stephen M.; Rai, Arti K.; Taylor, Ginger; Todd, Matthew H.; Pineda-Lucena, Antonio; Sali, Andrej; Marti-Renom, Marc A.

    2009-01-01

    Background Conventional patent-based drug development incentives work badly for the developing world, where commercial markets are usually small to non-existent. For this reason, the past decade has seen extensive experimentation with alternative R&D institutions ranging from private–public partnerships to development prizes. Despite extensive discussion, however, one of the most promising avenues—open source drug discovery—has remained elusive. We argue that the stumbling block has been the absence of a critical mass of preexisting work that volunteers can improve through a series of granular contributions. Historically, open source software collaborations have almost never succeeded without such “kernels”. Methodology/Principal Findings Here, we use a computational pipeline for: (i) comparative structure modeling of target proteins, (ii) predicting the localization of ligand binding sites on their surfaces, and (iii) assessing the similarity of the predicted ligands to known drugs. Our kernel currently contains 143 and 297 protein targets from ten pathogen genomes that are predicted to bind a known drug or a molecule similar to a known drug, respectively. The kernel provides a source of potential drug targets and drug candidates around which an online open source community can nucleate. Using NMR spectroscopy, we have experimentally tested our predictions for two of these targets, confirming one and invalidating the other. Conclusions/Significance The TDI kernel, which is being offered under the Creative Commons attribution share-alike license for free and unrestricted use, can be accessed on the World Wide Web at http://www.tropicaldisease.org. We hope that the kernel will facilitate collaborative efforts towards the discovery of new drugs against parasites that cause tropical diseases. PMID:19381286

  9. Biopython: freely available Python tools for computational molecular biology and bioinformatics.

    PubMed

    Cock, Peter J A; Antao, Tiago; Chang, Jeffrey T; Chapman, Brad A; Cox, Cymon J; Dalke, Andrew; Friedberg, Iddo; Hamelryck, Thomas; Kauff, Frank; Wilczynski, Bartek; de Hoon, Michiel J L

    2009-06-01

    The Biopython project is a mature open source international collaboration of volunteer developers, providing Python libraries for a wide range of bioinformatics problems. Biopython includes modules for reading and writing different sequence file formats and multiple sequence alignments, dealing with 3D macromolecular structures, interacting with common tools such as BLAST, ClustalW and EMBOSS, accessing key online databases, as well as providing numerical methods for statistical learning. Biopython is freely available, with documentation and source code at www.biopython.org, under the Biopython license.
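
    A minimal sketch of the kind of task these modules handle: parsing a FASTA file with SeqIO, translating each sequence, and writing the proteins back out. The file names are placeholders.

      # Minimal sketch of a typical Biopython workflow: read coding sequences
      # from a FASTA file, translate them, and write the proteins to a new file.
      from Bio import SeqIO
      from Bio.SeqRecord import SeqRecord

      proteins = []
      for record in SeqIO.parse("genes.fasta", "fasta"):
          protein_seq = record.seq.translate(to_stop=True)   # stop at first stop codon
          proteins.append(SeqRecord(protein_seq, id=record.id,
                                    description="translated ORF"))

      SeqIO.write(proteins, "proteins.fasta", "fasta")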

  10. Virtualized Networks and Virtualized Optical Line Terminal (vOLT)

    NASA Astrophysics Data System (ADS)

    Ma, Jonathan; Israel, Stephen

    2017-03-01

    The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined network (SDN) operating system capable of managing and controlling network, computing, and storage resources to support the CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.

  11. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  12. Implementation of Web-Based Education in Egypt through Cloud Computing Technologies and Its Effect on Higher Education

    ERIC Educational Resources Information Center

    El-Seoud, M. Samir Abou; El-Sofany, Hosam F.; Taj-Eddin, Islam A. T. F.; Nosseir, Ann; El-Khouly, Mahmoud M.

    2013-01-01

    The information technology educational programs at most universities in Egypt face many obstacles that can be overcome using technology enhanced learning. An open source Moodle eLearning platform has been implemented at many public and private universities in Egypt, as an aid to deliver e-content and to provide the institution with various…

  13. Development of an Innovative Interactive Virtual Classroom System for K-12 Education Using Google App Engine

    ERIC Educational Resources Information Center

    Mumba, Frackson; Zhu, Mengxia

    2013-01-01

    This paper presents a Simulation-based interactive Virtual ClassRoom web system (SVCR: www.vclasie.com) powered by state-of-the-art cloud computing technology from Google. SVCR integrates popular free open-source math, science and engineering simulations and provides functions such as secure user access control and management of courses,…

  14. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible configuration-interaction open-source shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.
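
    The core numerical task here, extracting the lowest eigenvalues of a very large sparse Hamiltonian with a Lanczos-type iteration, can be illustrated in a few lines with SciPy's ARPACK-based eigsh. The sketch below is only an illustration of that method on a random sparse matrix, not BIGSTICK itself.

      # Minimal sketch (not BIGSTICK): lowest eigenvalues of a large sparse
      # symmetric "Hamiltonian" via a Lanczos-type iterative solver.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh   # ARPACK (implicitly restarted Lanczos)

      n = 5000
      rng = np.random.default_rng(1)
      diag = rng.normal(size=n)
      offdiag = 0.1 * rng.normal(size=n - 1)
      hamiltonian = sp.diags([offdiag, diag, offdiag], offsets=[-1, 0, 1], format="csr")

      # five smallest-algebraic eigenvalues, without forming dense matrices
      energies = eigsh(hamiltonian, k=5, which="SA", return_eigenvectors=False)
      print(np.sort(energies))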

  15. Development and Use of an Adaptive Learning Environment to Research Online Study Behaviour

    ERIC Educational Resources Information Center

    Jonsdottir, Anna Helga; Jakobsdottir, Audbjorg; Stefansson, Gunnar

    2015-01-01

    This paper describes a system for research on the behaviour of students taking online drills. The system is accessible and free to use for anyone with web access. Based on open source software, the teaching material is licensed under a Creative Commons License. The system has been used for computer-assisted education in statistics, mathematics and…

  16. Digital data collection in paleoanthropology.

    PubMed

    Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay

    2015-01-01

    Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.

  17. Nektar++: An open-source spectral/hp element framework

    NASA Astrophysics Data System (ADS)

    Cantwell, C. D.; Moxey, D.; Comerford, A.; Bolis, A.; Rocco, G.; Mengaldo, G.; De Grazia, D.; Yakovlev, S.; Lombard, J.-E.; Ekelschot, D.; Jordi, B.; Xu, H.; Mohamied, Y.; Eskilsson, C.; Nelson, B.; Vos, P.; Biotto, C.; Kirby, R. M.; Sherwin, S. J.

    2015-07-01

    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy over low-order techniques at reduced computational cost for a given number of degrees of freedom. However, their proliferation is often limited by their complexity, which makes these methods challenging to implement and use. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.

  18. A Free and Open Source Tool to Assess the Accuracy of Land Cover Maps: Implementation and Application to Lombardy Region (Italy)

    NASA Astrophysics Data System (ADS)

    Bratic, G.; Brovelli, M. A.; Molinari, M. E.

    2018-04-01

    The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix, from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion-matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed us to compute and investigate less known indexes such as the Ground Truth and the Classification Success Index. The tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user community.
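
    For reference, two of the most common confusion-matrix-derived indexes, overall accuracy and Cohen's kappa, can be computed as in the sketch below. This is a generic illustration of the formulas, not the GRASS GIS tool described above.

      # Minimal sketch: overall accuracy and Cohen's kappa from a confusion matrix.
      import numpy as np

      def accuracy_indexes(confusion):
          """confusion[i, j] = pixels of reference class i mapped to class j."""
          confusion = np.asarray(confusion, dtype=float)
          total = confusion.sum()
          overall = np.trace(confusion) / total
          # chance agreement estimated from the row and column marginals
          expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
          kappa = (overall - expected) / (1.0 - expected)
          return overall, kappa

      cm = [[50, 3, 2],
            [5, 40, 5],
            [2, 4, 44]]
      print(accuracy_indexes(cm))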

  19. Open-Source Software for Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Oyafuso, Fabiano; Hua, Hook; Tisdale, Edwin; Hart, Don

    2004-01-01

    The Nanoelectronic Modeling 3-D (NEMO 3-D) computer program has been upgraded to open-source status through elimination of license-restricted components. The present version functions equivalently to the version reported in "Software for Numerical Modeling of Nanoelectronic Devices" (NPO-30520), NASA Tech Briefs, Vol. 27, No. 11 (November 2003), page 37. To recapitulate: NEMO 3-D performs numerical modeling of the electronic transport and structural properties of a semiconductor device that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. NEMO 3-D solves the applicable quantum matrix equation on a Beowulf-class cluster computer by use of a parallel-processing matrix vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. A prior upgrade of NEMO 3-D incorporated a capability for a strain treatment, parameterized for bulk material properties of GaAs and InAs, for two tight-binding submodels. NEMO 3-D has been demonstrated in atomistic analyses of effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In(0.6)Ga(0.4)As quantum dots.

  20. Sybil - efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large-scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
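
    Flux-balance analysis of the kind sybil performs amounts to a linear programme: maximize an objective flux subject to the steady-state constraint S v = 0 and flux bounds. The sketch below solves a three-reaction toy example with SciPy rather than sybil, purely to illustrate the formulation; the network and bounds are made up for the example.

      # Minimal sketch (not the sybil API): flux-balance analysis as a linear
      # programme for a toy network, solved with scipy.optimize.linprog.
      import numpy as np
      from scipy.optimize import linprog

      # columns: R1 (uptake -> A), R2 (A -> B), R3 (B -> export/"biomass")
      S = np.array([[1, -1,  0],    # metabolite A
                    [0,  1, -1]])   # metabolite B
      bounds = [(0, 10), (0, None), (0, None)]   # uptake limited to 10 units
      c = [0, 0, -1]                             # linprog minimizes, so negate v3

      result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", result.x)         # expected: [10, 10, 10]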
