Science.gov

Sample records for open-source cross-platform multi-modal

  1. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool.

    PubMed

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2009-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python bindings for wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
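
    The "common, defined coordinate frame" the abstract emphasizes is typically realized with a 4x4 voxel-to-world affine, as in the NIfTI-1 format DV3D reads: each modality's voxel indices map into one world space so that overlays line up. A minimal numpy sketch (the affine values are hypothetical, not taken from DV3D):

```python
import numpy as np

# Hypothetical NIfTI-style affine: 2 mm isotropic voxels, origin
# shifted so voxel (45, 54, 45) sits at world coordinate (0, 0, 0).
affine = np.array([
    [2.0, 0.0, 0.0, -90.0],
    [0.0, 2.0, 0.0, -108.0],
    [0.0, 0.0, 2.0, -90.0],
    [0.0, 0.0, 0.0, 1.0],
])

def voxel_to_world(affine, ijk):
    """Map voxel indices (i, j, k) to world-space mm coordinates."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)  # homogeneous
    return (affine @ ijk1)[:3]

print(voxel_to_world(affine, (45, 54, 45)))  # [0. 0. 0.]
```

    Data from a second modality with its own affine can be resampled into the same world space by composing one affine with the inverse of the other.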

  2. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python bindings for wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444

  3. OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis

    NASA Astrophysics Data System (ADS)

    Grohmann, C. H.; Campanha, G. A.

    2010-12-01

    Free and open source software (FOSS) are increasingly seen as synonymous with innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, with commercial licenses or that can be downloaded at no cost from the Internet. Some provide basic tools of stereographic projection such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc., while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of data formatting for input, Graphical User Interface (GUI) design and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based MacOS, there are no native packages for *nix (UNIX, Linux, BSD etc.) Operating Systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the Numpy module and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic exporting to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of
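
    The eigenvector analysis mentioned among the basic stereographic tools can be sketched with numpy: poles to planes are converted to direction cosines and the orientation tensor is diagonalized. The measurements and axis conventions below are illustrative, not taken from OpenStereo itself:

```python
import numpy as np

def pole_from_plane(dipdir_deg, dip_deg):
    """Lower-hemisphere unit pole to a plane given dip direction/dip.
    The pole trends dip direction + 180 and plunges 90 - dip."""
    trend = np.radians((np.asarray(dipdir_deg) + 180.0) % 360.0)
    plunge = np.radians(90.0 - np.asarray(dip_deg))
    x = np.cos(plunge) * np.sin(trend)   # east
    y = np.cos(plunge) * np.cos(trend)   # north
    z = np.sin(plunge)                   # down
    return np.column_stack([x, y, z])

# Hypothetical bedding measurements (dip direction, dip) in degrees.
data = np.array([[120, 40], [118, 42], [125, 38], [122, 41]])
poles = pole_from_plane(data[:, 0], data[:, 1])

# Orientation tensor and its eigenvalues: a largest eigenvalue near 1
# indicates a tight cluster of poles (i.e. consistent bedding).
T = poles.T @ poles / len(poles)
eigvals, eigvecs = np.linalg.eigh(T)
print(eigvals.round(3))
```

    The eigenvector paired with the largest eigenvalue gives the mean pole orientation; the ratios of the eigenvalues distinguish clusters from girdles.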

  4. A new, open-source, multi-modality digital breast phantom

    NASA Astrophysics Data System (ADS)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
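
    The Voronoi step used to create the initial glandular compartments can be sketched as nearest-seed labelling of a voxel grid; the grid size and seed count below are toy values, not the phantom's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy voxel grid and random seed points standing in for the initial
# glandular-compartment seeds (values are illustrative only).
shape = (32, 32, 32)
n_seeds = 8
seeds = rng.uniform(0, 32, size=(n_seeds, 3))

# Voronoi labelling: each voxel joins the compartment of its nearest seed.
idx = np.indices(shape).reshape(3, -1).T          # (N, 3) voxel coords
d2 = ((idx[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1).reshape(shape)

print(np.unique(labels))
```

    In the phantom this partition is only a starting point: ductal trees are then grown into each compartment and Perlin noise roughens the compartment boundaries into realistic glandular/fat interfaces.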

  5. An open-source and cross-platform framework for Brain Computer Interface-guided robotic arm control

    PubMed Central

    Kubben, Pieter L.; Pouratian, Nader

    2012-01-01

    Brain Computer Interfaces (BCIs) have focused on several areas, of which motor substitution has received particular interest. Whereas open-source BCI software is available to facilitate cost-effective collaboration between research groups, it mainly focuses on communication and computer control. We developed an open-source and cross-platform framework, which works with cost-effective equipment that allows researchers to enter the field of BCI-based motor substitution without major investments upfront. It is based on the C++ programming language and the Qt framework, and offers a separate class for custom MATLAB/Simulink scripts. It has been tested using a 14-channel wireless electroencephalography (EEG) device and a low-cost robotic arm that offers 5 degrees of freedom. The software contains four modules to control the robotic arm, one of which receives input from the EEG device. Strengths, current limitations, and future developments will be discussed. PMID:23372966

  6. An open-source and cross-platform framework for Brain Computer Interface-guided robotic arm control.

    PubMed

    Kubben, Pieter L; Pouratian, Nader

    2012-01-01

    Brain Computer Interfaces (BCIs) have focused on several areas, of which motor substitution has received particular interest. Whereas open-source BCI software is available to facilitate cost-effective collaboration between research groups, it mainly focuses on communication and computer control. We developed an open-source and cross-platform framework, which works with cost-effective equipment that allows researchers to enter the field of BCI-based motor substitution without major investments upfront. It is based on the C++ programming language and the Qt framework, and offers a separate class for custom MATLAB/Simulink scripts. It has been tested using a 14-channel wireless electroencephalography (EEG) device and a low-cost robotic arm that offers 5 degrees of freedom. The software contains four modules to control the robotic arm, one of which receives input from the EEG device. Strengths, current limitations, and future developments will be discussed. PMID:23372966
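
    How one module might turn decoded EEG commands into joint motions for a 5-degree-of-freedom arm can be sketched as a simple command dispatcher (the framework itself is C++/Qt; the command names, step size and joint limits here are invented for illustration):

```python
# Hypothetical command mapping for a 5-DOF arm: a classifier label
# (e.g. from an EEG decoder) selects a joint and a direction; joint
# angles are clipped to hypothetical limits.

JOINT_LIMITS = [(-90.0, 90.0)] * 5      # degrees, per joint
STEP = 5.0                              # degrees per decoded command

COMMANDS = {
    "left":  (0, -1),   # (joint index, direction)
    "right": (0, +1),
    "up":    (1, +1),
    "down":  (1, -1),
}

def apply_command(angles, label):
    """Return new joint angles after one decoded EEG command."""
    angles = list(angles)
    if label in COMMANDS:
        j, sign = COMMANDS[label]
        lo, hi = JOINT_LIMITS[j]
        angles[j] = max(lo, min(hi, angles[j] + sign * STEP))
    return angles

state = [0.0] * 5
for decoded in ["right", "right", "up", "left"]:
    state = apply_command(state, decoded)
print(state)  # [5.0, 5.0, 0.0, 0.0, 0.0]
```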

  7. Avogadro: Free, Open Source, Cross-Platform Computer Program for Building Molecules and Visualizing Structure

    NASA Astrophysics Data System (ADS)

    Hanwell, Marcus; Hutchison, Geoffrey

    2009-03-01

    The Avogadro project is a free, open source approach to building chemical structures. It has integrated analysis and three-dimensional visualization capabilities. Avogadro also uses external packages to perform quantum structure calculations. The work presented here illustrates a novel approach to working with the results of quantum calculations by visualizing possible molecular orbitals and allowing the user to select orbitals of interest. The Avogadro program allows the user to prepare new jobs for various quantum codes such as GAMESS-US, Q-Chem, Gaussian and Molpro. Due to the plugin-based nature of the Avogadro project many specialized options can be added, such as raytracing the electronic structure of the molecule to produce high-quality output, building carbon nanotube structures, or designing solid-state structures. Avogadro is already being used by educators and researchers. Due to the free and open source nature of the project, it can be readily downloaded and used by all students in and out of the classroom. It can also be tailored to particular institutions and/or courses.

  8. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    PubMed

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research. PMID:24258321
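
    The "online detection of eye movements using a custom algorithm" can be illustrated with a generic velocity-threshold rule, a common approach in eyetracking; this sketch is not necessarily PyGaze's actual algorithm, and the trace and threshold are synthetic:

```python
import numpy as np

def detect_saccade(x, y, t, vel_threshold=900.0):
    """Flag samples whose point-to-point velocity (px/s here; deg/s
    once calibrated) exceeds a threshold -- a generic velocity-based
    rule, not PyGaze's actual implementation."""
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    speed = np.hypot(dx, dy) / dt
    return speed > vel_threshold

# Synthetic 500 Hz gaze trace: fixation, a fast 100 px jump, fixation.
t = np.arange(0, 0.02, 0.002)
x = np.array([100, 100, 101, 100, 150, 200, 200, 199, 200, 200], float)
y = np.full_like(x, 300.0)
print(detect_saccade(x, y, t))  # True only at the two jump samples
```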

  9. OpenChrom: a cross-platform open source software for the mass spectrometric analysis of chromatographic data

    PubMed Central

    2010-01-01

    Background Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which needs to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between the analytical results. To address this situation a number of either commercial or non-profit software applications have been developed. These applications provide functionalities to import and analyze several data formats but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. Results This work describes a native approach to handle chromatographic data files. The approach can be extended in its functionality such as facilities to detect baselines, to detect, integrate and identify peaks and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied on the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations like do, undo and redo are supported. Conclusions OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms. It offers a customizable graphical user interface. The software is independent of the operating system because it is built on the Eclipse Rich Client Platform, which is written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions. They can be published using open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net. PMID:20673335
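
    A filter of the kind the abstract mentions for removing background can be sketched as rolling-minimum baseline subtraction; the algorithm choice and data are illustrative (OpenChrom itself is Java-based and its filters may differ):

```python
import numpy as np

def subtract_baseline(signal, window=51):
    """Crude baseline removal: use a rolling minimum as the baseline
    estimate, then subtract it. A stand-in for the kind of filter a
    chromatography tool provides, not OpenChrom's actual algorithm."""
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    baseline = np.array([padded[i:i + window].min()
                         for i in range(len(signal))])
    return signal - baseline, baseline

# Synthetic chromatogram: two Gaussian peaks on a drifting background.
x = np.arange(500, dtype=float)
background = 0.02 * x + 10.0
peaks = (100 * np.exp(-((x - 150) ** 2) / 50)
         + 60 * np.exp(-((x - 350) ** 2) / 80))
corrected, baseline = subtract_baseline(background + peaks)
print(round(float(corrected.max()), 1))
```

    Peak detection and integration would then operate on the corrected trace rather than the raw signal.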

  10. GeolOkit 1.0: a new Open Source, Cross-Platform software for geological data visualization in Google Earth environment

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Antoine; Bastin, Christophe; Watlet, Arnaud

    2016-04-01

    GIS software suites are today's essential tools to gather and visualise geological data, to apply spatial and temporal analysis and, finally, to create and share interactive maps for further geosciences' investigations. For these purposes, we developed GeolOkit: an open-source, freeware and lightweight software, written in Python, a high-level, cross-platform programming language. GeolOkit software is accessible through a graphical user interface, designed to run in parallel with Google Earth. It is a user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to plot them into the Google Earth environment using KML code. This workflow requires no third-party software, except Google Earth itself. GeolOkit comes with a large number of geosciences' labels, symbols, colours and placemarks and can process: (i) multi-points data, (ii) contours via several interpolation methods, (iii) discrete planar and linear structural data in 2D or 3D, supporting a large range of structure input formats, (iv) clustered stereonets and rose diagrams, (v) drawn cross-sections as vertical sections, (vi) georeferenced maps and vectors, (vii) field pictures using either geo-tracking metadata from a camera's built-in GPS module, or the same-day track of an external GPS. We invite you to discover all the functionalities of the GeolOkit software. As this project is under development, we welcome discussion regarding your needs, your ideas and contributions to the GeolOkit project.
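
    Plotting data into Google Earth "using KML code" amounts to emitting KML elements; a stdlib-only Python sketch (the station names and coordinates are made up for the example):

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def placemark(doc, name, lon, lat, description=""):
    """Append a KML Placemark for one field station.
    KML orders coordinates as lon,lat,alt."""
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    if description:
        ET.SubElement(pm, "description").text = description
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return pm

root = ET.Element("kml", xmlns=KML_NS)
doc = ET.SubElement(root, "Document")
placemark(doc, "Station 1", 4.35, 50.84, "bedding 120/40")
placemark(doc, "Station 2", 4.40, 50.86)

kml_text = ET.tostring(root, encoding="unicode")
print(kml_text[:60])
```

    Saving the string to a `.kml` file and opening it loads the placemarks directly into Google Earth, with no other software in between.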

  11. Multi-Modality Phantom Development

    SciTech Connect

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  12. mDCC_tools: characterizing multi-modal atomic motions in molecular dynamics trajectories

    PubMed Central

    Kasahara, Kota; Mohan, Neetha; Fukuda, Ikuo; Nakamura, Haruki

    2016-01-01

    Summary: We previously reported the multi-modal Dynamic Cross Correlation (mDCC) method for analyzing molecular dynamics trajectories. This method quantifies the correlation coefficients of atomic motions with complex multi-modal behaviors by using a Bayesian-based pattern recognition technique that can effectively capture transiently formed, unstable interactions. Here, we present an open source toolkit for performing the mDCC analysis, including pattern recognition, complex network analyses and visualizations. We include a tutorial document that thoroughly explains how to apply this toolkit for an analysis, using the example trajectory of the 100 ns simulation of an engineered endothelin-1 peptide dimer. Availability and implementation: The source code is available for free at http://www.protein.osaka-u.ac.jp/rcsfp/pi/mdcctools/, implemented in C++ and Python, and supported on Linux. Contact: kota.kasahara@protein.osaka-u.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153575
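
    The quantity mDCC generalizes is the conventional dynamic cross-correlation matrix, C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2><|dr_j|^2>), where dr is each atom's displacement from its mean position. A numpy sketch with a synthetic toy trajectory (the Bayesian multi-modal pattern recognition that distinguishes mDCC is omitted):

```python
import numpy as np

def dynamic_cross_correlation(traj):
    """Classical DCC matrix from a trajectory of shape
    (n_frames, n_atoms, 3): +1 for perfectly correlated motion,
    -1 for anti-correlated motion."""
    dr = traj - traj.mean(axis=0)                  # displacements
    cov = np.einsum("fix,fjx->ij", dr, dr) / len(traj)
    norm = np.sqrt(np.diag(cov))
    return cov / np.outer(norm, norm)

# Toy trajectory: atoms 0 and 1 move together, atom 2 moves oppositely.
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1, 3))
noise = 0.1 * rng.normal(size=(200, 3, 3))
traj = np.concatenate([base, base, -base], axis=1) + noise
C = dynamic_cross_correlation(traj)
print(C.round(2))
```

    mDCC replaces the single-mean displacement above with per-mode displacements recognized by Bayesian pattern matching, which is what lets it handle multi-modal atomic motions.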

  13. Multi Modal Anticipation in Fuzzy Space

    NASA Astrophysics Data System (ADS)

    Asproth, Viveca; Holmberg, Stig C.; Hâkansson, Anita

    2006-06-01

    We are all stakeholders in the geographical space, which makes up our common living and activity space. This means that a careful, creative, and anticipatory planning, design, and management of that space will be of paramount importance for our sustained life on earth. Here it is shown that the quality of such planning could be significantly increased with the help of a computer-based modelling and simulation tool. Further, the design and implementation of such a tool ought to be guided by the conceptual integration of some core concepts like anticipation and retardation, multi modal system modelling, fuzzy space modelling, and multi actor interaction.

  14. Quantitative multi-modal NDT data analysis

    SciTech Connect

    Heideklang, René; Shokouhi, Parisa

    2014-02-18

    A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular, whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
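
    A high-level fusion scheme of the kind described can be sketched by normalizing each sensor's detection map and combining them with a simple maximum rule; the sensors, data and rule here are illustrative stand-ins, not the study's actual pipeline:

```python
import numpy as np

def normalize(m):
    """Rescale a detection map to [0, 1] so sensors are comparable."""
    m = np.asarray(m, dtype=float)
    return (m - m.min()) / (m.max() - m.min())

def fuse_max(maps):
    """Maximum-rule fusion: a pixel is as suspicious as the most
    confident sensor. Inputs must already be normalized."""
    return np.max(maps, axis=0)

rng = np.random.default_rng(2)
groove = np.zeros((1, 64))
groove[0, 30:34] = 1.0            # synthetic built-in defect

# Each "modality" sees the same defect with independent noise,
# standing in for eddy-current, GMR and thermography maps.
sensors = [normalize(groove + 0.2 * rng.normal(size=groove.shape))
           for _ in range(3)]
fused = fuse_max(np.stack(sensors))
print(float(fused.max()))  # 1.0
```

    Other fusion rules (averaging, weighted sums, probabilistic combination) trade off sensitivity against specificity differently, which is the design space the study explores.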

  15. MoBILAB: an open source toolbox for analysis and visualization of mobile brain/body imaging data.

    PubMed

    Ojeda, Alejandro; Bigdely-Shamlo, Nima; Makeig, Scott

    2014-01-01

    A new paradigm for human brain imaging, mobile brain/body imaging (MoBI), involves synchronous collection of human brain activity (via electroencephalography, EEG) and behavior (via body motion capture, eye tracking, etc.), plus environmental events (scene and event recording), to study the joint brain/body dynamics supporting performance of naturally motivated human actions and interactions in 3-D environments (Makeig et al., 2009). Processing complex, concurrent, multi-modal, multi-rate data streams requires a signal-processing environment quite different from one designed to process single-modality time series data. Here we describe MoBILAB (more details available at sccn.ucsd.edu/wiki/MoBILAB), an open source, cross-platform toolbox running on MATLAB (The Mathworks, Inc.) that supports analysis and visualization of any mixture of synchronously recorded brain, behavioral, and environmental time series plus time-marked event stream data. MoBILAB can serve as a pre-processing environment for adding behavioral and other event markers to EEG data for further processing, and/or as a development platform for expanded analysis of simultaneously recorded data streams. PMID:24634649
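
    Handling "multi-modal, multi-rate data streams" starts with aligning streams recorded at different rates onto a common timeline. A numpy sketch with synthetic signals (the rates are illustrative, and MoBILAB itself runs on MATLAB, not Python):

```python
import numpy as np

# Two synchronously recorded streams at different rates: 512 Hz "EEG"
# and 120 Hz "motion capture" (synthetic signals for illustration).
t_eeg = np.arange(0, 2, 1 / 512)
t_mocap = np.arange(0, 2, 1 / 120)
eeg = np.sin(2 * np.pi * 10 * t_eeg)
mocap = np.cos(2 * np.pi * 1 * t_mocap)

# Re-sample the slower stream onto the EEG timeline so both streams
# can be analyzed sample-by-sample on one clock.
mocap_on_eeg = np.interp(t_eeg, t_mocap, mocap)

print(mocap_on_eeg.shape)  # (1024,)
```

    Event markers from any stream can then be expressed as indices on the shared timeline, which is what makes joint brain/body analysis possible.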

  16. PR-PR: Cross-Platform Laboratory Automation System

    SciTech Connect

    Linshiz, G; Stawski, N; Goyal, G; Bi, CH; Poust, S; Sharma, M; Mutalik, V; Keasling, JD; Hillson, NJ

    2014-08-01

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  17. Multi-modal image matching based on local frequency information

    NASA Astrophysics Data System (ADS)

    Liu, Xiaochun; Lei, Zhihui; Yu, Qifeng; Zhang, Xiaohu; Shang, Yang; Hou, Wang

    2013-12-01

    This paper addresses the problem of matching multi-modal images that share similar physical structures but differ in appearance. To emphasize the common structural information while suppressing the illumination- and sensor-dependent information between multi-modal images, two image representations, namely Mean Local Phase Angle (MLPA) and Frequency Spread Phase Congruency (FSPC), are proposed by using local frequency information in Log-Gabor wavelet transformation space. A confidence-aided similarity (CAS) that consists of a confidence component and a similarity component is designed to establish the correspondence between multi-modal images. The two representations are both invariant to contrast reversal and non-homogeneous illumination variation, and require no derivative or thresholding operation. The CAS, which integrates MLPA with FSPC tightly instead of treating them separately, can give more weight to the common structures emphasized by FSPC, and therefore further eliminates the influence of different sensor properties. We demonstrate the accuracy and robustness of our method by comparing it with popular multi-modal image matching methods. Experimental results show that our method improves on traditional multi-modal image matching and works robustly even in quite challenging situations (e.g. SAR & optical images).
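
    The local frequency information underlying MLPA can be sketched with radially symmetric log-Gabor filters applied in the Fourier domain. This simplified version omits the oriented filter bank and the FSPC measure used in the paper, and the scale parameters are illustrative:

```python
import numpy as np

def mean_local_phase_angle(img, scales=(6, 12, 24), sigma=0.55):
    """Average the local phase of complex responses to radial
    log-Gabor filters at a few scales -- a simplified stand-in for
    the paper's MLPA, which also uses oriented filters."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    F = np.fft.fft2(img)
    phase_sum = np.zeros(img.shape, dtype=float)
    for wavelength in scales:
        f0 = 1.0 / wavelength
        log_gabor = np.exp(-(np.log(radius / f0) ** 2)
                           / (2 * np.log(sigma) ** 2))
        log_gabor[0, 0] = 0.0               # zero DC response
        h = np.fft.ifft2(F * log_gabor)     # complex-valued response
        phase_sum += np.arctan2(h.imag, h.real)
    return phase_sum / len(scales)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                           # a step edge
mlpa = mean_local_phase_angle(img)
print(mlpa.shape)  # (64, 64)
```

    Because phase describes the type of local structure rather than its strength, representations built from it change little under illumination variation, which is the property the paper exploits.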

  18. Cross platform development using Delphi and Kylix

    SciTech Connect

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross platform component for EPICS Simple Channel Access (SCA) has been developed for use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  19. Multi-modality neuro-monitoring: conventional clinical trial design.

    PubMed

    Georgiadis, Alexandros L; Palesch, Yuko Y; Zygun, David; Hemphill, J Claude; Robertson, Claudia S; Leroux, Peter D; Suarez, Jose I

    2015-06-01

    Multi-modal monitoring has become an integral part of neurointensive care. However, our approach is at this time neither standardized nor backed by data from randomized controlled trials. The goal of the second Neurocritical Care Research Conference was to discuss research priorities in multi-modal monitoring, what research tools are available, as well as the latest advances in clinical trial design. This section of the meeting was focused on how such a trial should be designed so as to maximize yield and avoid mistakes of the past. PMID:25832350

  20. Creating Open Source Conversation

    ERIC Educational Resources Information Center

    Sheehan, Kate

    2009-01-01

    Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…

  1. Open Source Vision

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    Increasingly, colleges and universities are turning to open source as a way to meet their technology infrastructure and application needs. Open source has changed life for visionary CIOs and their campus communities nationwide. The author discusses what these technologists see as the benefits--and the considerations.

  2. Utilizing Multi-Modal Literacies in Middle Grades Science

    ERIC Educational Resources Information Center

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  3. Multi-modal locomotion: from animal to application.

    PubMed

    Lock, R J; Burgess, S C; Vaidyanathan, R

    2014-03-01

    The majority of robotic vehicles that can be found today are bound to operations within a single medium (i.e. land, air or water). This is very rarely the case when considering locomotive capabilities in natural systems. Utility for small robots often reflects the exact same problem domain as small animals, hence providing numerous avenues for biological inspiration. This paper begins to investigate the various modes of locomotion adopted by different genus groups in multiple media as an initial attempt to determine the compromise in ability adopted by the animals when achieving multi-modal locomotion. A review of current biologically inspired multi-modal robots is also presented. The primary aim of this research is to lay the foundation for a generation of vehicles capable of multi-modal locomotion, allowing ambulatory abilities in more than one medium, surpassing current capabilities. By identifying and understanding when natural systems use specific locomotion mechanisms, when they opt for disparate mechanisms for each mode of locomotion rather than using a synergized singular mechanism, and how this affects their capability in each medium, similar combinations can be used as inspiration for future multi-modal biologically inspired robotic platforms. PMID:24343102

  4. Crux: rapid open source protein tandem mass spectrometry analysis.

    PubMed

    McIlwain, Sean; Tamura, Kaipo; Kertesz-Farkas, Attila; Grant, Charles E; Diament, Benjamin; Frewen, Barbara; Howbert, J Jeffry; Hoopmann, Michael R; Käll, Lukas; Eng, Jimmy K; MacCoss, Michael J; Noble, William Stafford

    2014-10-01

    Efficiently and accurately analyzing big protein tandem mass spectrometry data sets requires robust software that incorporates state-of-the-art computational, machine learning, and statistical methods. The Crux mass spectrometry analysis software toolkit ( http://cruxtoolkit.sourceforge.net ) is an open source project that aims to provide users with a cross-platform suite of analysis tools for interpreting protein mass spectrometry data. PMID:25182276

  5. A bioinspired multi-modal flying and walking robot.

    PubMed

    Daler, Ludovic; Mintchev, Stefano; Stefanini, Cesare; Floreano, Dario

    2015-01-01

    With the aim to extend the versatility and adaptability of robots in complex environments, a novel multi-modal flying and walking robot is presented. The robot consists of a flying wing with adaptive morphology that can perform both long distance flight and walking in cluttered environments for local exploration. The robot's design is inspired by the common vampire bat Desmodus rotundus, which can perform aerial and terrestrial locomotion with limited trade-offs. The wings' adaptive morphology allows the robot to modify the shape of its body in order to increase its efficiency during terrestrial locomotion. Furthermore, aerial and terrestrial capabilities are powered by a single locomotor apparatus, which reduces the total complexity and weight of this multi-modal robot. PMID:25599118

  6. Combining Multi-modal Features for Social Media Analysis

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Spiros; Giannakidou, Eirini; Kompatsiaris, Ioannis; Patras, Ioannis; Vakali, Athena

    In this chapter we discuss methods for efficiently modeling the diverse information carried by social media. The problem is viewed as a multi-modal analysis process where specialized techniques are used to overcome the obstacles arising from the heterogeneity of data. Focusing on the optimal combination of low-level features (i.e., early fusion), we present a bio-inspired algorithm for feature selection that weights the features based on their appropriateness to represent a resource. Under the same objective of optimal feature combination we also examine the use of pLSA-based aspect models, as the means to define a latent semantic space where heterogeneous types of information can be effectively combined. Tagged images taken from social sites have been used in the characteristic scenarios of image clustering and retrieval, to demonstrate the benefits of multi-modal analysis in social media.
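
    Early fusion of heterogeneous low-level features can be sketched as weighted concatenation of per-modality normalized blocks; the weights stand in for the appropriateness scores the chapter's bio-inspired selection algorithm would learn, and all values below are arbitrary:

```python
import numpy as np

def early_fusion(feature_sets, weights):
    """Early fusion: z-score each modality's feature block, scale it
    by a per-modality weight, and concatenate into one vector per
    item, so heterogeneous features share one representation."""
    blocks = []
    for feats, w in zip(feature_sets, weights):
        feats = np.asarray(feats, dtype=float)
        z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
        blocks.append(w * z)
    return np.hstack(blocks)

rng = np.random.default_rng(3)
visual = rng.normal(size=(10, 5))    # e.g. colour histogram features
textual = rng.normal(size=(10, 3))   # e.g. tag co-occurrence features
fused = early_fusion([visual, textual], weights=[0.7, 0.3])
print(fused.shape)  # (10, 8)
```

    Clustering or retrieval then runs on the fused vectors; the pLSA-based aspect models mentioned above are an alternative that maps both modalities into a shared latent space instead.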

  7. MINERVA - A Multi-Modal Radiation Treatment Planning System

    SciTech Connect

    D. E. Wessol; C. A. Wemple; D. W. Nigg; J. J. Cogliati; M. L. Milvich; C. Frederickson; M. Perkins; G. A. Harkin

    2004-10-01

Recently, research efforts have begun to examine the combination of BNCT with external beam photon radiotherapy (Barth et al. 2004). In order to properly prepare treatment plans for patients being treated with combinations of radiation modalities, appropriate planning tools must be available. To facilitate this, researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and Montana State University (MSU) have undertaken development of a fully multi-modal radiation treatment planning system.

  8. Open Source in Education

    ERIC Educational Resources Information Center

    Lakhan, Shaheen E.; Jhunjhunwala, Kavita

    2008-01-01

Educational institutions have rushed to put their academic resources and services online, bringing the global community onto a common platform and awakening the interest of investors. Despite continuing technical challenges, online education shows great promise. Open source software offers one approach to addressing the technical problems in…

  9. Evaluating Open Source Portals

    ERIC Educational Resources Information Center

    Goh, Dion; Luyt, Brendan; Chua, Alton; Yee, See-Yong; Poh, Kia-Ngoh; Ng, How-Yeu

    2008-01-01

    Portals have become indispensable for organizations of all types trying to establish themselves on the Web. Unfortunately, there have only been a few evaluative studies of portal software and even fewer of open source portal software. This study aims to add to the available literature in this important area by proposing and testing a checklist for…

  10. Open-Source Colorimeter

    PubMed Central

    Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.

    2013-01-01

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032

  11. Open-Source GIS

    SciTech Connect

    Vatsavai, Raju; Burk, Thomas E; Lime, Steve

    2012-01-01

The components making up an Open Source GIS are explained in this chapter. A map server (Sect. 30.1) can broadly be defined as a software platform for dynamically generating spatially referenced digital map products. The University of Minnesota MapServer (UMN MapServer) is one such system. Its basic features are visualization, overlay, and query. Section 30.2 names and explains many of the geospatial open source libraries, such as GDAL and OGR. The other libraries are FDO, JTS, GEOS, JCS, MetaCRS, and GPSBabel. The application examples include derived GIS software and data format conversions. Quantum GIS, its origin, and its applications are explained in detail in Sect. 30.3. The features include a rich GUI, attribute tables, vector symbols, labeling, editing functions, projections, georeferencing, GPS support, analysis, and Web Map Server functionality. Future developments will address mobile applications, 3-D, and multithreading. The origins of PostgreSQL are outlined and PostGIS is discussed in detail in Sect. 30.4. PostGIS extends PostgreSQL by implementing the Simple Feature standard. Section 30.5 details the most important open source licenses, such as the GPL, the LGPL, the MIT License, and the BSD License, as well as the role of the Creative Commons.
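To give a flavor of the Simple Feature operations that libraries like GEOS and PostGIS implement, a minimal point-in-polygon containment test (ray casting) can be sketched in pure Python. This is an illustrative stand-in written from scratch, not code from any of the libraries named above:

```python
def point_in_polygon(x, y, ring):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            # x-coordinate where the edge crosses that level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

In a real GIS stack this predicate corresponds to `ST_Contains`/`ST_Within`-style queries, which also handle holes, multipolygons, and spatial indexes.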

  12. A multi-modal parcellation of human cerebral cortex.

    PubMed

    Glasser, Matthew F; Coalson, Timothy S; Robinson, Emma C; Hacker, Carl D; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-08-11

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease. PMID:27437579

  13. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.

  14. Multi-modal netted sensor fence for homeland security

    NASA Astrophysics Data System (ADS)

    Shi, Weiqun; Fante, Ronald; Yoder, John; Crawford, Gregory

    2005-05-01

Potential terrorists/adversaries can exploit a wide range of airborne threats against civilian and military targets. Currently there is no effective, low-cost solution to robustly and reliably detect and identify low-observable airborne vehicles such as small, low-flying aircraft or cruise missiles that might be carrying chemical, biological or even nuclear weapons in realistic environments. This paper describes the development of a forward-based fence that contains a multi-modal mix of various low-cost, low-power, netted sensors, including unsophisticated radar, acoustic sensors, and optical (infrared and visible) cameras, to detect, track and discriminate such threats. Candidate target (Cessna, Beechcraft, crop duster, and cruise missile) signature phenomenologies are studied in detail through theoretical analysis, numerical simulation, or field experiment. Assessments for all three modalities (radar, acoustic, and IR) indicate reasonable detectability and detection range. A multi-modal kinematic tracker is employed to predict the location, speed, and heading of the target. Results from a notional, template-based classification approach reveal reasonable discrimination between the different aircraft tested in the field experiments.

  15. Denoising of Multi-Modal Images with PCA Self-Cross Bilateral Filter

    NASA Astrophysics Data System (ADS)

    Qiu, Yu; Urahama, Kiichi

We present the PCA self-cross bilateral filter for denoising multi-modal images. We first apply principal component analysis to the input multi-modal images. We then smooth the first principal component with a preliminary filter and use it as a supplementary image for cross bilateral filtering of the input images. Among the candidate preliminary filters, the undecimated wavelet transform is useful for effective denoising of various multi-modal images such as color, multi-lighting, and medical images.
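The cross bilateral filtering step described in this abstract can be sketched as follows: the range weights are computed from a supplementary guide image (standing in for the smoothed first principal component), so edges present in the guide survive while noise in the input is averaged away. The function name and all parameter values below are illustrative, not from the paper:

```python
import numpy as np

def cross_bilateral(image, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Cross (joint) bilateral filter: spatial Gaussian weights times
    range weights taken from the guide image, not the input itself."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = image[y0:y1, x0:x1]
            gpatch = guide[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            rng_w = np.exp(-((gpatch - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

# Illustrative use: with a flat guide the range weights are all 1, so the
# filter reduces to Gaussian smoothing and the noise variance drops.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((16, 16))
guide = np.full((16, 16), 0.5)
denoised = cross_bilateral(noisy, guide)
```

A production implementation would vectorize or approximate this (the naive loop is O(pixels × window)), but the weighting scheme is the essence of the method.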

  16. How Is Open Source Special?

    ERIC Educational Resources Information Center

    Kapor, Mitchell

    2005-01-01

    Open source software projects involve the production of goods, but in software projects, the "goods" consist of information. The open source model is an alternative to the conventional centralized, command-and-control way in which things are usually made. In contrast, open source projects are genuinely decentralized and transparent. Transparent…

  17. The origin of human multi-modal communication.

    PubMed

    Levinson, Stephen C; Holler, Judith

    2014-09-19

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system. PMID:25092670

  18. Plasmonic Gold Nanostars for Multi-Modality Sensing and Diagnostics

    PubMed Central

    Liu, Yang; Yuan, Hsiangkuo; Kersey, Farrell R.; Register, Janna K.; Parrott, Matthew C.; Vo-Dinh, Tuan

    2015-01-01

    Gold nanostars (AuNSs) are unique systems that can provide a novel multifunctional nanoplatform for molecular sensing and diagnostics. The plasmonic absorption band of AuNSs can be tuned to the near infrared spectral range, often referred to as the “tissue optical window”, where light exhibits minimal absorption and deep penetration in tissue. AuNSs have been applied for detecting disease biomarkers and for biomedical imaging using multi-modality methods including surface-enhanced Raman scattering (SERS), two-photon photoluminescence (TPL), magnetic resonance imaging (MRI), positron emission tomography (PET), and X-ray computer tomography (CT) imaging. In this paper, we provide an overview of the recent development of plasmonic AuNSs in our laboratory for biomedical applications and highlight their potential for future translational medicine as a multifunctional nanoplatform. PMID:25664431

  19. Passive multi-modal sensors for the urban environment

    NASA Astrophysics Data System (ADS)

    Ladas, Andrew; Frankel, Ronald

    2005-05-01

The urban environment poses a great many obstacles for the modern soldier, from complex buildings and streets to unknown or hidden combatants and non-combatants. To provide improved situational awareness and short-range protection, a variety of sensors and sensor systems are under investigation and development. In order to provide timely information from small, low-cost sensor systems, ARL has been investigating the use of passive multi-modal sensors for the individual soldier. These sensors will combine several different sensing modalities and fuse the information from them at the sensor level. This will improve the sensors' ability to discriminate targets, reduce false alarms, and minimize the amount of information that must be transmitted to the user. In addition, passive sensors are inherently lower power and more covert than active systems. This report details the initial accomplishments and presents early data on several sensing modalities under investigation.

  20. Multi-modal cockpit interface for improved airport surface operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.

  1. Non-rigid multi-modal registration on the GPU

    NASA Astrophysics Data System (ADS)

    Vetter, Christoph; Guetter, Christoph; Xu, Chenyang; Westermann, Rüdiger

    2007-03-01

Non-rigid multi-modal registration of images/volumes is becoming increasingly necessary in many medical settings. While efficient registration algorithms have been published, the speed of the solutions is a problem in clinical applications. Harnessing the computational power of the graphics processing unit (GPU) for general-purpose computation has become increasingly popular as a way to speed up algorithms further, but the algorithms have to be adapted to the data-parallel, streaming model of the GPU. This paper describes the implementation of a non-rigid, multi-modal registration using mutual information and the Kullback-Leibler divergence between observed and learned joint intensity distributions. The entire registration process is implemented on the GPU, including a GPU-friendly computation of two-dimensional histograms using vertex texture fetches as well as an implementation of recursive Gaussian filtering on the GPU. Since the computation is performed on the GPU, interactive visualization of the registration process can be done without bus transfer between main memory and video memory. This allows the user to observe the registration process and to evaluate the result more easily. Two hybrid approaches distributing the computation between the GPU and CPU are discussed. The first approach uses the CPU for lower resolutions and the GPU for higher resolutions; the second approach uses the GPU to compute a first approximation to the registration that is then used as the starting point for registration on the CPU using double precision. The results of the CPU implementation are compared to the different GPU approaches with regard to speed as well as image quality. The GPU performs up to 5 times faster per iteration than the CPU implementation.
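The mutual-information similarity measure at the heart of this kind of registration can be sketched from the joint intensity histogram of the two images. The NumPy version below is a plain CPU illustration of the measure, not the paper's GPU implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information estimated from the joint intensity histogram:
    high when one image's intensities predict the other's, regardless of
    any (possibly non-linear) mapping between the two modalities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # zero cells contribute nothing; avoids log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
inverted = 1.0 - img                                     # same structure, different "modality"
shuffled = rng.permutation(img.ravel()).reshape(64, 64)  # structure destroyed
mi_aligned = mutual_information(img, inverted)
mi_random = mutual_information(img, shuffled)
```

A registration loop maximizes this quantity over deformation parameters; inverting the intensities (a crude stand-in for a modality change) leaves MI high, while spatial misalignment drives it toward zero.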

  2. Imaging quality assessment of multi-modal miniature microscope.

    PubMed

    Lee, Junwon; Rogers, Jeremy; Descour, Michael; Hsu, Elizabeth; Aaron, Jesse; Sokolov, Konstantin; Richards-Kortum, Rebecca

    2003-06-16

We are developing a multi-modal miniature microscope (4M device) to image morphology and cytochemistry in vivo and provide better delineation of tumors. The 4M device is designed to be a complete microscope on a chip, including optical, micro-mechanical, and electronic components. It has advantages such as compact size and capability for microscopic-scale imaging. This paper presents an optics-only prototype 4M device, the very first imaging system made of sol-gel material. The micro-optics used in the 4M device have a diameter of 1.3 mm. Metrology for the imaging quality assessment of the prototype device is presented. We describe causes of imaging performance degradation in order to improve the fabrication process. We built a multi-modal imaging test-bed to measure first-order properties and to assess the imaging quality of the 4M device. The 4M prototype has a field of view of 290 µm in diameter, a magnification of -3.9, a working distance of 250 µm and a depth of field of 29.6 ± 6 µm. We report the modulation transfer function (MTF) of the 4M device as a quantitative metric of imaging quality. Based on the MTF data, we calculated a Strehl ratio of 0.59. In order to investigate the cause of imaging quality degradation, the surface characterization of lenses in 4M devices is measured and reported. We also imaged both polystyrene microspheres similar in size to epithelial cell nuclei and cervical cancer cells. Imaging results indicate that the 4M prototype can resolve the cellular detail necessary for detection of precancer. PMID:19466016

  3. MEDCIS: Multi-Modality Epilepsy Data Capture and Integration System.

    PubMed

    Zhang, Guo-Qiang; Cui, Licong; Lhatoo, Samden; Schuele, Stephan U; Sahoo, Satya S

    2014-01-01

Sudden Unexpected Death in Epilepsy (SUDEP) is the leading mode of epilepsy-related death and is most common in patients with intractable, frequent, and continuing seizures. A statistically significant cohort of patients for SUDEP study requires meticulous, prospective follow-up of a large population that is at elevated risk, best represented by the Epilepsy Monitoring Unit (EMU) patient population. Multiple EMUs need to collaborate and share data to build a larger cohort of potential SUDEP patients using a state-of-the-art informatics infrastructure. To address the challenges of data integration and data access across multiple EMUs, we developed the Multi-Modality Epilepsy Data Capture and Integration System (MEDCIS), which combines retrospective clinical free-text processing using NLP, prospective structured data capture using an ontology-driven interface, and interfaces for cohort search and signal visualization, all in a single integrated environment. A dedicated Epilepsy and Seizure Ontology (EpSO) has been used to streamline the user interfaces, enhance usability, and enable mappings across distributed databases so that federated queries can be executed. MEDCIS contained 936 patient data sets from the EMUs of University Hospitals Case Medical Center (UH CMC) in Cleveland and Northwestern Memorial Hospital (NMH) in Chicago. Patients from UH CMC and NMH were stored in different databases and then federated through MEDCIS using EpSO and our mapping module. More than 77 GB of multi-modal signal data were processed using the Cloudwave pipeline and made available for rendering through the web interface. About 74% of the 40 open clinical questions of interest were answerable accurately using the EpSO-driven VISual AGgregator and Explorer (VISAGE) interface. Questions not directly answerable were due either to their inherent computational complexity, the unavailability of primary information, or the scope of concept that has been formulated in the existing Ep

  5. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are now broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully loaded into a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http
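The mosaic creation described above (dividing a huge image into small tiles, with or without overlap) reduces to computing a grid of tile bounding boxes. The `mosaic_tiles` helper below is a hypothetical sketch of that bookkeeping, not code from the NDPITools:

```python
def mosaic_tiles(width, height, tile, overlap=0):
    """Compute tile bounding boxes (x, y, w, h) that cover a huge image,
    with an optional overlap, so each tile fits in RAM on its own."""
    step = tile - overlap
    boxes = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            boxes.append((x, y, min(tile, width - x), min(tile, height - y)))
            if x + tile >= width:   # this tile already reaches the right edge
                break
        if y + tile >= height:      # this row already reaches the bottom edge
            break
    return boxes

# A 1000x600 slide cut into 512-pixel tiles with 64 pixels of overlap.
tiles = mosaic_tiles(1000, 600, tile=512, overlap=64)
```

Each box can then be read and processed independently, which is what makes per-tile analysis feasible on an average laptop.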

  6. Multi-modal antigen specific therapy for autoimmunity.

    PubMed

    Legge, K L; Bell, J J; Li, L; Gregg, R; Caprio, J C; Zaghouani, H

    2001-10-01

Peripheral tolerance represents an attractive strategy to down-regulate previously activated T cells and suppress an ongoing disease. Herein, immunoglobulins (Igs) were used to deliver self and altered-self peptides for efficient peptide presentation without costimulation, to test for modulation of experimental allergic encephalomyelitis (EAE). Accordingly, the encephalitogenic proteolipid protein (PLP) sequence 139-151 (referred to as PLP1) and an altered form of PLP1 known as PLP-LR were genetically expressed on Igs, and the resulting Ig-PLP1 and Ig-PLP-LR were tested for efficient presentation of the peptides and for amelioration of ongoing EAE. Evidence is presented indicating that Ig-PLP1 as well as Ig-PLP-LR, given in saline to mice with ongoing clinical EAE, suppresses subsequent relapses. However, aggregation of both chimeras allows crosslinking of Fcgamma receptors (FcgammaRs) and induction of IL-10 production by APCs but does not promote the up-regulation of costimulatory molecules. Consequently, IL-10 displays bystander suppression and synergizes with presentation without costimulation to drive effective modulation of EAE. As Ig-PLP1 is more potent than Ig-PLP-LR in the down-regulation of T cells, we conclude that peptide affinity plays a critical role in this multi-modal approach to T cell modulation. PMID:11890614

  7. Multi-modal sensing using photoactive thin films

    NASA Astrophysics Data System (ADS)

    Ryu, Donghyeon; Loh, Kenneth J.

    2014-08-01

    The need for a reliable prognosis of the health of structural systems has promoted the development of sensing technologies capable of simultaneously detecting multiple types of damage. However, conventional sensors are designed to only measure a specific structural response (e.g., strain, displacement, or acceleration). This limitation forces one to use a wide variety of sensors densely instrumented on a given structure, which results in high overhead costs and requires extensive signal processing of raw sensor data. In this study, a photoactive thin film that has been engineered for multi-modal sensing to selectively detect strain and pH is proposed. In addition, the thin film is self-sensing in that it does not require external power to operate. Instead, light illumination causes the photoactive film to generate an electrical current, whose magnitude is directly related to applied strains (for deformations, impact or cracks) or pH (as a precursor of corrosion). First, the thin films were fabricated by spin-coating photoactive and conjugated polymers like poly(3-hexylthiophene) (P3HT). The thin film was also encoded with pH sensitivity by integrating polyaniline (PANI) as one component within the multilayered film architecture. Second, the optical response of the P3HT and PANI thin films subjected to applied strains or pH was characterized using absorption spectroscopy. Lastly, it was also verified that the thin films could selectively sense strain or pH depending on the wavelengths of light used for sensor interrogation.

  8. A versatile clearing agent for multi-modal brain imaging.

    PubMed

    Costantini, Irene; Ghobril, Jean-Pierre; Di Giovanna, Antonino Paolo; Allegra Mascaro, Anna Letizia; Silvestri, Ludovico; Müllenbroich, Marie Caroline; Onofri, Leonardo; Conti, Valerio; Vanzi, Francesco; Sacconi, Leonardo; Guerrini, Renzo; Markram, Henry; Iannello, Giulio; Pavone, Francesco Saverio

    2015-01-01

    Extensive mapping of neuronal connections in the central nervous system requires high-throughput µm-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multi-modal optical techniques. Here, we introduce a versatile brain clearing agent (2,2'-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and low-viscous and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy and in translational research by imaging immunostained human dysplastic brain tissue. PMID:25950610

  9. Multi-modality molecular imaging: pre-clinical laboratory configuration

    NASA Astrophysics Data System (ADS)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and of drug-target interactions. The imaging instrumentation in our facility includes a microPET scanner, a four-wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  10. Multi-modality systems for molecular tomographic imaging

    NASA Astrophysics Data System (ADS)

    Li, Mingze; Bai, Jing

    2009-11-01

In vivo small animal imaging is a cornerstone in the study of human diseases, providing important clues on the pathogenesis, progression and treatment of many disorders. Molecular tomographic imaging can probe complex biologic interactions dynamically and study diseases and treatment responses over time in the same animal. Each current imaging technique, including microCT, microMRI, microPET, microSPECT, microUS, BLT and FMT, has its own advantages and applications; however, none of them can provide structural, functional and molecular information in one context. Multi-modality imaging, which utilizes the strengths of different modalities to provide a complete understanding of the object under investigation, has emerged as an important alternative in small animal imaging. This article introduces the latest developments in multimodality systems for small animal tomographic imaging. After a systematic review of imaging principles, systems and commercial products for each stand-alone method, we introduce several recent multimodality strategies. In particular, two dual-modality systems, i.e. FMT-CT and FMT-PET, are presented in detail. The article concludes that although most multimodality systems are still at the laboratory research stage, they will see further development and wide application in the near future.

  11. Ex-vivo multi-modal microscopy of healthy skin

    NASA Astrophysics Data System (ADS)

    Guevara, Edgar; Gutiérrez-Hernández, José Manuel; Castonguay, Alexandre; Lesage, Frédéric; González, Francisco Javier

    2014-09-01

    The thorough characterization of skin samples is a critical step in investigating dermatological diseases. The combination of depth-sensitive anatomical imaging with molecular imaging has the potential to provide a wealth of information about the skin. In this proof-of-concept work we present high-resolution mosaic images of skin biopsies acquired using optical coherence tomography (OCT) and manually co-registered with standard microscopy, two-dimensional Raman spectral mapping and fluorescence imaging. A human breast skin sample, embedded in paraffin, was imaged with a swept-source OCT system at 1310 nm. Individual OCT volumes were acquired in a fully automated fashion in order to obtain a large field-of-view at high resolution (~10 μm). Based on anatomical features, the other three modalities were manually co-registered to the projected OCT volume using an affine transformation. A drawback is the manual co-registration, which may limit the utility of this method. However, the results indicate that the multiple imaging modalities provide complementary information about the sample. This pilot study suggests that multi-modal microscopy may be a valuable tool in the characterization of skin biopsies.

  12. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    PubMed

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, recognition is challenging due to variations in MR/CT appearance and in the shape and pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This architecture can fuse image features from different modalities in an unsupervised manner and automatically rectify the pose of the vertebrae. The fusion of MR and CT image features improves the discriminative power of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrasts, resolutions and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. PMID:27104497

  13. The origin of human multi-modal communication

    PubMed Central

    Levinson, Stephen C.; Holler, Judith

    2014-01-01

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system. PMID:25092670

  14. Multi-Modal Integrated Safety, Security & Environmental Program Strategy

    SciTech Connect

    Walker, Randy M; Omitaomu, Olufemi A; Ganguly, Auroop R; Abercrombie, Robert K; Sheldon, Frederick T

    2008-01-01

    This paper describes an approach to assessing and protecting the surface transportation infrastructure from a network science viewpoint. We address transportation security from a human behavior-dynamics perspective under both normal and emergency conditions for the purpose of measuring, managing and mitigating risks. The key factor in the planning and design of a robust transportation network solution is to ensure accountability for safety, security and environmental risks. The Oak Ridge National Laboratory (ORNL) Multi-Modal Integrated Safety, Security and Environmental Program (M2IS2EP) evolved from a joint US Department of Energy (DOE) Oak Ridge Office (ORO) Assets Utilization Program and ORNL SensorNet Program initiative named the Identification and Monitoring of Radiation (in commerce) Shipments (IMRicS). In November 2002 the first of six pilot demonstrations was constructed at the Tennessee I-40/75 Knox County Weigh Station outside of Knoxville. Over the life of the project four more installations were deployed with various levels of ORNL oversight. In October 2004 the ORNL SensorNet Program commissioned a research team to develop a project plan and to identify and develop a strategic vision in support of the SensorNet Program, keeping in mind the needs of the various governmental constituencies (i.e., DOT/DHS/EPA) for improving the safety, security and environment of the highway transportation system. Ultimately a more comprehensive ORNL SensorNet Program entitled Trusted Corridors was established and presented to ORNL, DOE, DOT, DHS, EPA and State leaders. Several of these entities adopted their own versions of these programs and are at various stages of deployment. All of these initiatives and pilots form the foundation of the concepts and ideas of M2IS2EP and will be discussed later in this paper.

  15. Deformable registration of multi-modal data including rigid structures

    SciTech Connect

    Huesman, Ronald H.; Klein, Gregory J.; Kimdon, Joey A.; Kuo, Chaincy; Majumdar, Sharmila

    2003-05-02

    Multi-modality imaging studies are becoming more widely utilized in the analysis of medical data. Anatomical data from CT and MRI are useful for analyzing or further processing functional data from techniques such as PET and SPECT. When data are not acquired simultaneously, even when these data are acquired on a dual-imaging device using the same bed, motion can occur that requires registration between the reconstructed image volumes. As the human torso allows non-rigid motion, this type of motion should be estimated and corrected. We report a deformable registration technique that utilizes rigid registration for bony structures, while allowing elastic transformation of soft tissue, to more accurately register the entire image volume. The technique is applied to the registration of CT and MR images of the lumbar spine. First, a global rigid registration is performed to approximately align features. Bony structures are then segmented from the CT data using a semi-automated process, and bounding boxes for each vertebra are established. Each CT subvolume is then individually registered to the MRI data using a piecewise rigid registration algorithm and a mutual information image similarity measure. The resulting set of rigid transformations allows for accurate registration of the parts of the CT and MRI data representing the vertebrae, but not the adjacent soft tissue. To align the soft tissue, a smoothly varying deformation is computed using a thin plate spline (TPS) algorithm. The TPS technique requires a sparse set of landmarks that are to be brought into correspondence. These landmarks are automatically obtained from the segmented data using simple edge-detection techniques and random sampling from the edge candidates. A smoothness parameter is also included in the TPS formulation to characterize the stiffness of the soft tissue. An appropriate stiffness factor is estimated iteratively by applying the mutual information cost function to the result.
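
    The thin plate spline step described above can be sketched in a few lines. The following is a minimal 2-D NumPy illustration using the standard TPS formulation, not the authors' implementation; landmark extraction and the iterative, mutual-information-driven estimation of the stiffness factor are omitted, and the `stiffness` parameter stands in for the smoothness term:

```python
import numpy as np

def _tps_kernel(r):
    # Thin plate spline radial basis U(r) = r^2 log r, with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r * r * np.log(r), 0.0)

def tps_fit(src, dst, stiffness=0.0):
    """Fit a 2-D thin plate spline warp taking `src` landmarks to `dst`.

    src, dst : (n, 2) arrays of corresponding landmark coordinates.
    stiffness: smoothness parameter; 0 interpolates the landmarks exactly,
               larger values yield a stiffer (smoother) deformation.
    Returns a function warp(points) that applies the fitted transform.
    """
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = _tps_kernel(d) + stiffness * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)                 # (n+3, 2): weights + affine
    w, a = params[:n], params[n:]

    def warp(pts):
        pts = np.atleast_2d(pts)
        U = _tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

    return warp
```

    With `stiffness=0` the warp interpolates the landmarks exactly; increasing it trades landmark fidelity for a smoother deformation, which is how a stiffness parameter can characterize the soft tissue.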

  16. A new region descriptor for multi-modal medical image registration and region detection.

    PubMed

    Xiaonan Wan; Dongdong Yu; Feng Yang; Caiyun Yang; Chengcai Leng; Min Xu; Jie Tian

    2015-08-01

    Establishing accurate anatomical correspondences plays a critical role in multi-modal medical image registration and region detection. Although many feature-based registration methods have been proposed to detect these correspondences, most are based on point descriptors, which incur high memory costs and cannot represent local region information. In this paper, we propose a new region descriptor that depicts the features in each region, instead of each point, as a vector. First, the feature attributes of each point are extracted by a Gabor filter bank combined with a gradient filter. Then, the region descriptor is defined as the covariance of the feature attributes of the points inside the region, based on which a cost function is constructed for multi-modal image registration. Finally, our proposed region descriptor is applied to both multi-modal region detection and similarity metric measurement in multi-modal image registration. Experiments demonstrate the feasibility and effectiveness of our proposed region descriptor. PMID:26736903
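
    As a rough sketch of the idea, such a descriptor amounts to a covariance matrix over per-point feature vectors, and two descriptors can be compared on the manifold of symmetric positive-definite matrices. The NumPy fragment below is an illustration under those assumptions, not the paper's code; the Gabor/gradient feature extraction and the registration cost function are omitted:

```python
import numpy as np

def region_covariance(features):
    """Region descriptor: covariance of the per-point feature attributes.

    features: (n_points, d) array, one row of feature attributes (e.g.
    Gabor filter responses plus gradient measures) per point in the region.
    Returns the d x d covariance matrix describing the whole region.
    """
    f = features - features.mean(axis=0)
    return f.T @ f / (len(features) - 1)

def covariance_distance(C1, C2):
    # Affine-invariant dissimilarity between SPD descriptors:
    # sqrt(sum_i log^2(lambda_i)), with lambda_i the generalized
    # eigenvalues of the matrix pencil (C1, C2).
    lam = np.linalg.eigvals(np.linalg.solve(C2, C1)).real
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

    A covariance descriptor is compact (d x d regardless of region size) and blends multiple feature channels, which is one motivation for region-level rather than point-level description.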

  17. An Open Source Simulation System

    NASA Technical Reports Server (NTRS)

    Slack, Thomas

    2005-01-01

    An investigation into the current state of the art of open source real-time programming practices. This document covers what technologies are available, how easy it is to obtain, configure, and use them, and some performance measures taken on the different systems. A matrix of vendors and their products is included as part of this investigation, but it is not an exhaustive list and represents only a snapshot in time of a field that is changing rapidly. Specifically, three approaches are investigated: 1. Completely open source on generic hardware, downloaded from the net. 2. Open source packaged by a vendor and provided as a free evaluation copy. 3. Proprietary hardware with pre-loaded, source-available proprietary software provided by the vendor for our evaluation.

  18. The Connectome Viewer Toolkit: An Open Source Framework to Manage, Analyze, and Visualize Connectomes

    PubMed Central

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit – a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/ PMID:21713110

  19. THE OPEN SOURCING OF EPANET

    EPA Science Inventory

    A proposal was made at the 2009 EWRI Congress in Kansas City, MO to establish an Open Source Project (OSP) for the widely used EPANET pipe network analysis program. This would be an ongoing collaborative effort among a group of geographically dispersed advisors and developers, wo...

  20. Cross-platform digital assessment forms for evaluating surgical skills

    PubMed Central

    2015-01-01

    A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments in various times and locations, and situations requiring substantial subsequent data processing or complex score calculations. PMID:25959653

  1. Open source layered sensing model

    NASA Astrophysics Data System (ADS)

    Rovito, Todd V.; Abayowa, Bernard O.; Talbert, Michael L.

    2011-06-01

    This paper looks at using open source tools (Blender [17], LuxRender [18], and Python [19]) to build an image processing model for exploring combinations of sensors/platforms for any given image resolution. The model produces camera position, camera attitude, and synthetic camera data that can be used for exploitation purposes. We focus on electro-optical (EO) visible sensors to simplify the rendering, but this work could be extended to use other rendering tools that support different modalities. Due to the computational complexity of ray tracing, we employ the Amazon Elastic Compute Cloud to help speed up the generation of large ray-traced scenes. The key idea of the paper is to provide an architecture for layered sensing simulation which is modular in design and constructed on open-source off-the-shelf software. This architecture shows how leveraging existing open-source software allows practical layered sensing modeling to be rapidly assimilated and utilized in real-world applications. In this paper we demonstrate that our model output is automatically exploitable by using the generated data with an innovative video frame mosaic algorithm.

  2. Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge

    ERIC Educational Resources Information Center

    Blown, Eric; Bryce, Tom G. K.

    2010-01-01

    The astronomy concepts of 345 young people were studied over a 10-year period using a multi-media, multi-modal methodology in a research design where survey participants were interviewed three times and control subjects were interviewed twice. The purpose of the research was to search for evidence to clarify competing theories on "conceptual…

  3. Ultrasmall Biocompatible WO3-x Nanodots for Multi-Modality Imaging and Combined Therapy of Cancers.

    PubMed

    Wen, Ling; Chen, Ling; Zheng, Shimin; Zeng, Jianfeng; Duan, Guangxin; Wang, Yong; Wang, Guanglin; Chai, Zhifang; Li, Zhen; Gao, Mingyuan

    2016-07-01

    Ultrasmall biocompatible WO3-x nanodots with an outstanding X-ray radiation sensitization effect are prepared and demonstrated to be applicable for multi-modality tumor imaging, through computed tomography and photoacoustic imaging (PAI), and for effective cancer treatment combining both photothermal therapy and radiation therapy. PMID:27136070

  4. Measurement of photosynthetic response to plant water stress using a multi-modal sensing system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant yield and productivity are significantly affected by abiotic stresses such as water or nutrient deficiency. An automated, timely detection of plant stress can mitigate stress development, thereby maximizing productivity and fruit quality. A multi-modal sensing system was developed and evalua...

  5. (In)Flexibility of Constituency in Japanese in Multi-Modal Categorial Grammar with Structured Phonology

    ERIC Educational Resources Information Center

    Kubota, Yusuke

    2010-01-01

    This dissertation proposes a theory of categorial grammar called Multi-Modal Categorial Grammar with Structured Phonology. The central feature that distinguishes this theory from the majority of contemporary syntactic theories is that it decouples (without completely segregating) two aspects of syntax--hierarchical organization (reflecting…

  6. Information content and analysis methods for Multi-Modal High-Throughput Biomedical Data

    NASA Astrophysics Data System (ADS)

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R.; Picone, Marco; Poli, Tito; Aliferis, Constantin F.; Statnikov, Alexander

    2014-03-01

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, and DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect "multi-modal" data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data do contain distinct biological information that can be useful for answering complex biological questions, their value for predicting clinical phenotypes, and the contributions of each type of input, remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase the predictive signal compared to gene expression data alone.
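
    A minimal sketch of this kind of comparison: score each modality on its own, then score an integrative model built by concatenating the feature matrices. The classifier below is a deliberately simple leave-one-out nearest-centroid stand-in, not one of the methods used in the study, and `X_expr`, `X_meth`, and `y` are hypothetical data arrays:

```python
import numpy as np

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier,
    a deliberately simple stand-in for the supervised methods in the study."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        # Class centroids from the training fold; predict the nearest one.
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(X)

# Uni-modal analyses score each modality separately; a simple integrative
# multi-modal analysis concatenates the feature matrices:
#   acc_expr = loo_nearest_centroid_accuracy(X_expr, y)
#   acc_meth = loo_nearest_centroid_accuracy(X_meth, y)
#   acc_both = loo_nearest_centroid_accuracy(np.hstack([X_expr, X_meth]), y)
```

    Comparing `acc_both` against the best single-modality accuracy mirrors the study's question of whether integration adds predictive signal beyond gene expression alone.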

  7. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    ERIC Educational Resources Information Center

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  8. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information. PMID:26978821

  9. A modular cross-platform GPU-based approach for flexible 3D video playback

    NASA Astrophysics Data System (ADS)

    Olsson, Roger; Andersson, Håkan; Sjöström, Mårten

    2011-03-01

    Different compression formats for stereo and multiview-based 3D video are being standardized, and software players capable of decoding and presenting these formats on different display types are a vital part of the commercialization and evolution of 3D video. However, the number of publicly available software video players capable of decoding and playing multiview 3D video is still quite limited. This paper describes the design and implementation of a GPU-based real-time 3D video playback solution, built on top of cross-platform, open source libraries for video decoding and hardware-accelerated graphics. A software architecture is presented that efficiently processes and presents high-definition 3D video in real time and flexibly supports both current 3D video formats and emerging standards. Moreover, a set of bottlenecks in the processing of 3D video content in a GPU-based real-time 3D video playback solution is identified and discussed.

  10. Use of Multi-Modal Media and Tools in an Online Information Literacy Course: College Students' Attitudes and Perceptions

    ERIC Educational Resources Information Center

    Chen, Hsin-Liang; Williams, James Patrick

    2009-01-01

    This project studies the use of multi-modal media objects in an online information literacy class. One hundred sixty-two undergraduate students answered seven surveys. Significant relationships are found among computer skills, teaching materials, communication tools and learning experience. Multi-modal media objects and communication tools are…

  11. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  12. Development of cross-platform computer-based tutorials.

    PubMed

    Cooper, J A; McCandless, B K

    1996-11-01

    The development, distribution, and support of computer-based instruction in radiology is complicated by the fact that many radiology departments use computers with different operating systems: Macintosh and Windows. A program for developing cross-platform on-line documentation was adapted to develop a graphical hypertext tutorial that would run identically on both types of computers. A tutorial for interpreting ventilation-perfusion scans was created that would run on the Windows platform. The graphics were converted to Macintosh format, and the identical source information was recompiled to run on the Macintosh platform. It was found that the tutorial, with its hypertext, full-color graphics, graphical links, searching, user annotation, and bookmarks, could be displayed and operated identically between platforms. Cross-platform tutorials must be developed on a Windows-based computer but require only one source file for both Windows-based and Macintosh computers. These tutorials can be distributed free of charge, and minimal training is required for those who already know how to use Windows on-line help. PMID:8946549

  13. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  14. Niching Methods: Speciation Theory Applied for Multi-modal Function Optimization

    NASA Astrophysics Data System (ADS)

    Shir, Ofer M.; Bäck, Thomas

    While contemporary Evolutionary Algorithms (EAs) excel in various types of optimization, their generalization to speciational subpopulations is much needed upon their deployment to multi-modal landscapes, mainly due to the typical loss of population diversity. The resulting techniques, known as niching methods, are the main focus of this chapter, which will provide the motivation, pose the problem from both the biological and the computational perspectives, and describe algorithmic solutions. Biologically inspired by organic speciation processes, and armed with the real-world incentive of obtaining multiple solutions for better decision making, we present here the application of certain bioprocesses to multi-modal function optimization, by means of a broad overview of the existing work in the field as well as a detailed description of specific test cases.
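
    Fitness sharing, one of the classic niching methods such overviews cover, can be sketched in a few lines of NumPy. This is the generic textbook formulation, not code from the chapter:

```python
import numpy as np

def shared_fitness(population, fitness, sigma_share=0.1, alpha=1.0):
    """Classic fitness sharing: each individual's raw fitness is divided by
    its niche count, so crowded peaks are penalized and the population is
    pushed to spread across multiple optima of a multi-modal landscape.

    population:  (n, d) array of individuals; fitness: (n,) raw fitness.
    sigma_share: niche radius; alpha: shape of the sharing function.
    """
    d = np.linalg.norm(population[:, None, :] - population[None, :, :], axis=-1)
    # Triangular sharing function: 1 at distance 0, 0 beyond sigma_share.
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)       # always >= 1 (self-distance is 0)
    return fitness / niche_count
```

    Selection then operates on the shared fitness, so an isolated individual on a neglected peak outcompetes an equally fit individual sitting in a crowd; this directly counters the loss of population diversity mentioned above.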

  15. A wireless modular multi-modal multi-node patch platform for robust biosignal monitoring.

    PubMed

    Pantelopoulos, Alexandros; Saldivar, Enrique; Roham, Masoud

    2011-01-01

    In this paper a wireless, modular, multi-modal, multi-node patch platform is described. The platform comprises a low-cost, semi-disposable patch design aimed at unobtrusive ambulatory monitoring of multiple physiological parameters. Owing to its modular design, it can be interfaced with various low-power RF communication and data storage technologies, while the fusion of multi-modal and multi-node features facilitates the measurement of several biosignals from multiple on-body locations for robust feature extraction. Preliminary results from the patch platform are presented that illustrate its capability to extract respiration rate from three independent metrics, which combined can give a more robust estimate of the actual respiratory rate. PMID:22255929

  16. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    SciTech Connect

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  17. NMRFx Processor: a cross-platform NMR data processing program.

    PubMed

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis. PMID:27457481
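
    As an illustration of the kind of processing involved, the fragment below sketches a minimal 1-D iterative soft thresholding reconstruction in NumPy. It is a generic textbook variant of the method, not NMRFx Processor's implementation, and real NMR data would additionally need apodization, phasing, and multi-dimensional handling:

```python
import numpy as np

def ist_reconstruct(fid, sampled, n_iter=200, frac=0.98):
    """Minimal 1-D iterative soft thresholding (IST) for a non-uniformly
    sampled FID: unmeasured points start at zero, each iteration
    soft-thresholds the spectrum at a fraction of its current maximum and
    then restores the measured data points (data consistency).

    fid:     complex array with valid values at the sampled positions.
    sampled: boolean mask of acquired positions.
    Returns the reconstructed spectrum (FFT of the completed FID).
    """
    x = np.where(sampled, fid, 0.0).astype(complex)
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        mag = np.abs(spec)
        cutoff = frac * mag.max()
        with np.errstate(invalid="ignore", divide="ignore"):
            # Soft threshold: shrink magnitudes by `cutoff`, zero the rest.
            spec = np.where(mag > cutoff, spec * (mag - cutoff) / mag, 0.0)
        x = np.fft.ifft(spec)
        x[sampled] = fid[sampled]      # data-consistency step
    return np.fft.fft(x)
```

    Each pass transfers a little more of the dominant spectral components into the estimate while the measured FID points are held fixed, so sparse spectra emerge from the undersampled data over the iterations.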

  18. Cross-platform hypermedia examinations on the Web.

    PubMed Central

    Williams, T. W.; Giuse, N. B.; Huber, J. T.; Janco, R. L.

    1995-01-01

    The authors developed a multiple-choice medical testing system delivered using the World Wide Web. It evolved from an older, single-platform, locally-developed computer-based examination. The old system offered a number of advantages over traditional paper-based examinations, such as digital graphics and quicker, easier scoring. The new system builds on these advantages with its true cross-platform design and the addition of hypertext learning responses. The benefits of this system will increase as more medical educational resources migrate to the Web. Faculty and student feedback has been positive. The authors encourage other institutions to experiment with Web-based teaching materials, including examinations. PMID:8563333

  19. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records

    PubMed Central

    Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    Objective There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. Materials and methods We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. Results An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. Discussion A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. Conclusion We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries. PMID:22319176

  20. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the development of motion sensors, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework to produce the final result. The motion and audio models are learned using Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison shows that the multi-modal model composed of the three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system thus provides application technology for understanding human actions of daily life more precisely.
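The decision-level integration this abstract describes (HMMs for motion and audio, a Random Forest for video, with results combined into one prediction) can be sketched generically as a weighted score fusion. The function and weights below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_scores(motion_scores, audio_scores, video_scores, weights=(1.0, 1.0, 1.0)):
    """Decision-level fusion: combine per-class scores from three
    unimodal recognizers (hypothetical interface) into one prediction.
    Scores are normalized per modality before the weighted sum."""
    fused = np.zeros_like(np.asarray(motion_scores, dtype=float))
    for w, s in zip(weights, (motion_scores, audio_scores, video_scores)):
        s = np.asarray(s, dtype=float)
        fused += w * (s / s.sum())          # normalize so modalities are comparable
    return int(np.argmax(fused))            # index of the winning gesture class

# Example: three gesture classes; video disagrees, but motion + audio win.
print(fuse_scores([0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.2, 0.5, 0.3]))  # → 0
```

Setting a modality's weight to zero reduces this to a unimodal decision, which is essentially the comparison the experiments perform.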

  1. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
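The decision-level combination described here (weighted CLDP-Gabor and CLDP-Depth distances, with the probe assigned the identity of smallest total distance) can be sketched as follows. The weights and distance values are illustrative assumptions:

```python
def classify(probe_gabor_d, probe_depth_d, w_gabor=0.6, w_depth=0.4):
    """Decision-level fusion: the total classification distance is a
    weighted sum of the CLDP-Gabor and CLDP-Depth distances to each
    gallery identity; the probe is assigned the identity with the
    smallest total distance. Weights here are hypothetical."""
    totals = {pid: w_gabor * probe_gabor_d[pid] + w_depth * probe_depth_d[pid]
              for pid in probe_gabor_d}
    return min(totals, key=totals.get)

# Hypothetical distances from one probe to three gallery identities.
gabor = {"id1": 0.9, "id2": 0.3, "id3": 0.7}
depth = {"id1": 0.4, "id2": 0.5, "id3": 0.8}
print(classify(gabor, depth))  # → id2 (total 0.6*0.3 + 0.4*0.5 = 0.38)
```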

  2. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; Nazikian, Raffi

    2016-05-01

    Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This work identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. This work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  3. A multi-modal face recognition method using complete local derivative patterns and depth maps.

    PubMed

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290

  4. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    DOE PAGESBeta

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; Nazikian, Raffi

    2016-05-03

Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This paper identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. Finally, this work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  5. Discriminative multi-task feature selection for multi-modality classification of Alzheimer’s disease

    PubMed Central

    Ye, Tingting; Zu, Chen; Jie, Biao

    2016-01-01

Recently, multi-task based feature selection methods have been used in multi-modality based classification of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, in traditional multi-task feature selection methods, some useful discriminative information among subjects is usually not well mined for further improving the subsequent classification performance. Accordingly, in this paper, we propose a discriminative multi-task feature selection method to select the most discriminative features for multi-modality based classification of AD/MCI. Specifically, for each modality, we train a linear regression model using the corresponding modality of data, and further enforce the group-sparsity regularization on weights of those regression models for joint selection of common features across multiple modalities. Furthermore, we propose a discriminative regularization term based on the intra-class and inter-class Laplacian matrices to better use the discriminative information among subjects. To evaluate our proposed method, we perform extensive experiments on 202 subjects, including 51 AD patients, 99 MCI patients, and 52 healthy controls (HC), from the baseline MRI and FDG-PET image data of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The experimental results show that our proposed method not only improves the classification performance, but also has potential to discover the disease-related biomarkers useful for diagnosis of disease, along with the comparison to several state-of-the-art methods for multi-modality based AD/MCI classification. PMID:26311394

  6. Information content and analysis methods for Multi-Modal High-Throughput Biomedical Data

    PubMed Central

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R.; Picone, Marco; Poli, Tito; Aliferis, Constantin F.; Statnikov, Alexander

    2014-01-01

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect “multi-modal” data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and contributions of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data. PMID:24651673

  7. Discriminative multi-task feature selection for multi-modality classification of Alzheimer's disease.

    PubMed

    Ye, Tingting; Zu, Chen; Jie, Biao; Shen, Dinggang; Zhang, Daoqiang

    2016-09-01

    Recently, multi-task based feature selection methods have been used in multi-modality based classification of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, in traditional multi-task feature selection methods, some useful discriminative information among subjects is usually not well mined for further improving the subsequent classification performance. Accordingly, in this paper, we propose a discriminative multi-task feature selection method to select the most discriminative features for multi-modality based classification of AD/MCI. Specifically, for each modality, we train a linear regression model using the corresponding modality of data, and further enforce the group-sparsity regularization on weights of those regression models for joint selection of common features across multiple modalities. Furthermore, we propose a discriminative regularization term based on the intra-class and inter-class Laplacian matrices to better use the discriminative information among subjects. To evaluate our proposed method, we perform extensive experiments on 202 subjects, including 51 AD patients, 99 MCI patients, and 52 healthy controls (HC), from the baseline MRI and FDG-PET image data of the Alzheimer's Disease Neuroimaging Initiative (ADNI). The experimental results show that our proposed method not only improves the classification performance, but also has potential to discover the disease-related biomarkers useful for diagnosis of disease, along with the comparison to several state-of-the-art methods for multi-modality based AD/MCI classification. PMID:26311394
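The group-sparsity regularization at the heart of this joint feature selection (an l2,1 penalty on the weight matrix, one row per feature and one column per modality) has a simple closed-form proximal operator: rows are soft-thresholded by their l2 norm, so a feature is kept or discarded across all modalities at once. A minimal sketch of that operator, omitting the paper's Laplacian-based discriminative term:

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft-thresholding: the proximal operator of the
    group-sparsity (l2,1) penalty used to select features jointly
    across modalities. Rows whose l2 norm falls below t are zeroed,
    i.e. the feature is discarded in every modality at once."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

# Toy weight matrix: 3 features x 2 modalities.
W = np.array([[2.0, 2.0],    # strong in both modalities -> kept (shrunk)
              [0.1, 0.1],    # weak in both -> zeroed out entirely
              [1.0, 0.0]])   # moderate in one modality -> shrunk but kept
print(l21_prox(W, 0.5))
```

In a full method this operator would be applied inside a proximal-gradient loop over the multi-modality regression loss; the toy matrix and threshold above are illustrative only.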

  8. Multi-modal image registration based on gradient orientations of minimal uncertainty.

    PubMed

    De Nigris, Dante; Collins, D Louis; Arbel, Tal

    2012-12-01

    In this paper, we propose a new multi-scale technique for multi-modal image registration based on the alignment of selected gradient orientations of reduced uncertainty. We show how the registration robustness and accuracy can be improved by restricting the evaluation of gradient orientation alignment to locations where the uncertainty of fixed image gradient orientations is minimal, which we formally demonstrate correspond to locations of high gradient magnitude. We also embed a computationally efficient technique for estimating the gradient orientations of the transformed moving image (rather than resampling pixel intensities and recomputing image gradients). We have applied our method to different rigid multi-modal registration contexts. Our approach outperforms mutual information and other competing metrics in the context of rigid multi-modal brain registration, where we show sub-millimeter accuracy with cases obtained from the retrospective image registration evaluation project. Furthermore, our approach shows significant improvements over standard methods in the highly challenging clinical context of image guided neurosurgery, where we demonstrate misregistration of less than 2 mm with relation to expert selected landmarks for the registration of pre-operative brain magnetic resonance images to intra-operative ultrasound images. PMID:22987509
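The core idea of the metric (evaluate gradient-orientation agreement only where the fixed image's gradient magnitude, and hence orientation certainty, is high) can be sketched with NumPy. The cos^2 score and the keep-fraction are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def orientation_alignment(fixed, moving, keep_fraction=0.1):
    """Sketch: score gradient-orientation agreement only at the
    fixed-image locations of highest gradient magnitude (where
    orientation uncertainty is minimal). Returns a value in [0, 1];
    1.0 means the selected orientations are perfectly aligned."""
    gfy, gfx = np.gradient(fixed.astype(float))
    gmy, gmx = np.gradient(moving.astype(float))
    mag = np.hypot(gfx, gfy)
    thresh = np.quantile(mag, 1.0 - keep_fraction)    # keep strongest gradients
    mask = mag >= thresh
    d_theta = np.arctan2(gfy, gfx)[mask] - np.arctan2(gmy, gmx)[mask]
    return float(np.mean(np.cos(d_theta) ** 2))

# An image compared with itself should score perfect alignment.
img = np.random.default_rng(0).random((32, 32))
print(round(orientation_alignment(img, img), 3))  # → 1.0
```

In registration, a score like this would be maximized over the transform applied to the moving image; restricting the mask keeps each evaluation cheap and robust.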

  9. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.

    2013-12-01

The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: the S-HYPE model for Sweden, the BALT-HYPE model for the Baltic Sea, and the E-HYPE model for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large scale modeling in different regions of the world. An important goal of our work is to make our data and tools available as open data and services. To this end we created the HYPE Open Source Community (OSC), which makes the source code of HYPE available to anyone interested in its further development. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from.

  10. Query Health: standards-based, cross-platform population health surveillance

    PubMed Central

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussion This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371

  11. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Arheimer, Berit; Pers, Charlotta; Isberg, Kristina

    2013-04-01

The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model (Lindström et al., 2010). It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. In Sweden, the model is used by water authorities to fulfil the Water Framework Directive and the Marine Strategy Framework Directive. It is used for characterization, forecasts, and scenario analyses. Model data can be downloaded for free from three different HYPE applications: Europe (www.smhi.se/e-hype), Baltic Sea basin (www.smhi.se/balt-hype), and Sweden (vattenweb.smhi.se). The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modelling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code will be delivered frequently. The main objective of the HYPE OSC is to provide public access to a state-of-the-art operational hydrological model and to encourage hydrologic expertise from different parts of the world to contribute to model improvement. HYPE OSC is open to everyone interested in hydrology, hydrological modelling and code development - e.g. scientists, authorities, and consultancies. The HYPE Open Source Community was initiated in November 2011 by a kick-off and workshop with 50 eager participants.

  12. Free for All: Open Source Software

    ERIC Educational Resources Information Center

    Schneider, Karen

    2008-01-01

    Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…

  13. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  14. PLplot: Cross-platform Software Package for Scientific Plots

    NASA Astrophysics Data System (ADS)

    Many Developers

    2011-06-01

    PLplot is a cross-platform software package for creating scientific plots. To help accomplish that task it is organized as a core C library, language bindings for that library, and device drivers which control how the plots are presented in non-interactive and interactive plotting contexts. The PLplot core library can be used to create standard x-y plots, semi-log plots, log-log plots, contour plots, 3D surface plots, mesh plots, bar charts and pie charts. Multiple graphs (of the same or different sizes) may be placed on a single page, and multiple pages are allowed for those device formats that support them. PLplot has core support for Unicode. This means for our many Unicode-aware devices that plots can be labelled using the enormous selection of Unicode mathematical symbols. A large subset of our Unicode-aware devices also support complex text layout (CTL) languages such as Arabic, Hebrew, and Indic and Indic-derived CTL scripts such as Devanagari, Thai, Lao, and Tibetan. PLplot device drivers support a number of different file formats for non-interactive plotting and a number of different platforms that are suitable for interactive plotting. It is easy to add new device drivers to PLplot by writing a small number of device dependent routines.

  15. Developing a cross-platform port simulation system.

    SciTech Connect

    Nevins, M. R.

    1999-07-08

    With the advent of networked computer systems that connect disparate computer hardware and operating systems, it is important for port simulation systems to be able to run on a wide variety of computer platforms. This paper describes the design and implementation issues in reengineering the PORTSIM model in order to field the model to Windows-based systems as well as to Unix-based systems such as the Sun, Silicon Graphics, and HP workstations. The existing PORTSIM model was written to run on a Sun workstation running Unix. The model was initially implemented in MODSIM and C and utilized embedded SQL to retrieve port, ship, and cargo data from back-end OMCLE databases. Output reports, graphs, and tables for model results were written in C, utilizing third-party graphics libraries. This design and implementation worked well for the intended hardware platform and configuration, but as the number of model users grew and as the capabilities of the model expanded, a need developed to field the model to varying hardware configurations. This new requirement demanded that the existing design be modified to more easily allow for model fielding and maintenance. A phased approach is described that (1) identifies the existing model from which cross-platform development began, (2) delineates an intermediate client-server model that has been developed utilizing Java to allow for greater flexibility and ease in distributing and fielding the model, and (3) describes the final goals to be achieved in this development process.

  16. Open Source 2010: Reflections on 2007

    ERIC Educational Resources Information Center

    Wheeler, Brad

    2007-01-01

    Colleges and universities and commercial firms have demonstrated great progress in realizing the vision proffered for "Open Source 2007," and 2010 will mark even greater progress. Although much work remains in refining open source for higher education applications, the signals are now clear: the collaborative development of software can provide…

  17. Open Source, Openness, and Higher Education

    ERIC Educational Resources Information Center

    Wiley, David

    2006-01-01

    In this article David Wiley provides an overview of how the general expansion of open source software has affected the world of education in particular. In doing so, Wiley not only addresses the development of open source software applications for teachers and administrators, he also discusses how the fundamental philosophy of the open source…

  18. 7 Questions to Ask Open Source Vendors

    ERIC Educational Resources Information Center

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  19. Defining an Open Source Strategy for NASA

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Crichton, D. J.; Lindsay, F.; Berrick, S. W.; Marshall, J. J.; Downs, R. R.

    2011-12-01

    Over the course of the past year, we have worked to help frame a strategy for NASA and open source software. This includes defining information processes to understand open source licensing, attribution, commerciality, redistribution, communities, architectures, and interactions within the agency. Specifically we held a training session at the NASA Earth Science Data Systems Working Group meeting in Open Source software as it relates to the NASA Earth Science data systems enterprise, including EOSDIS, the Distributed Active Archive Centers (DAACs), ACCESS proposals, and the MEASURES communities, and efforts to understand how open source software can be both consumed and produced within that ecosystem. In addition, we presented at the 1st NASA Open Source Summit (OSS) and helped to define an agency-level strategy, a set of recommendations and paths forward for how to identify healthy open source communities, how to deal with issues such as contributions originating from other agencies, and how to search out talent with the right skills to develop software for NASA in the modern age. This talk will review our current recommendations for open source at NASA, and will cover the set of thirteen recommendations output from the NASA Open Source Summit and discuss some of their implications for the agency.

  20. A graph-based approach for the retrieval of multi-modality medical images.

    PubMed

    Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan

    2014-02-01

    In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state
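The proximity constraint described here (pruning edges of the complete graph so a tumour vertex connects only to spatially nearby organs) can be sketched with plain Python. The coordinates, names, and distance threshold below are illustrative assumptions, not values from the paper:

```python
import math

def build_tumour_graph(tumours, organs, max_dist=50.0):
    """Sketch of the constrained graph: instead of a complete graph,
    each tumour vertex keeps edges only to organs within a
    spatial-proximity threshold, so retrieval can discriminate by
    tumour localisation. Positions are (x, y, z) centroids."""
    edges = []
    for t_name, t_pos in tumours.items():
        for o_name, o_pos in organs.items():
            d = math.dist(t_pos, o_pos)
            if d <= max_dist:                 # prune spatially remote organs
                edges.append((t_name, o_name, round(d, 1)))
    return edges

tumours = {"tumour1": (10, 10, 10)}
organs = {"lung": (20, 10, 10), "liver": (200, 150, 80)}
print(build_tumour_graph(tumours, organs))  # → [('tumour1', 'lung', 10.0)]
```

A retrieval system would then compare such graphs (vertices carrying modality-specific features) with a graph-matching similarity, as the abstract outlines.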

  1. Multi-modal characterization of nanogram amounts of a photosensitive polymer

    NASA Astrophysics Data System (ADS)

    Kim, Seonghwan; Lee, Dongkyu; Yun, Minhyuk; Jung, Namchul; Jeon, Sangmin; Thundat, Thomas

    2013-01-01

    Here, we demonstrate a multi-modal approach to the simultaneous characterization of poly(vinyl cinnamate) (PVCN) using a microcantilever sensor. We integrate nanomechanical thermal analysis with photothermal cantilever deflection spectroscopy to discern ultraviolet (UV) exposure-induced variations in the thermodynamic and thermomechanical properties of PVCN as a function of temperature and UV irradiation time. UV radiation-induced photo-cross-linking in PVCN is verified by the increase in Young's modulus and cantilever deflection, as well as the decrease in deflection hysteresis and in the intensity of the C=C peak in the nanomechanical infrared spectrum, as a function of UV irradiation time.

  2. Nonlinear dynamics of magnetically coupled beams for multi-modal vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Abed, I.; Kacem, N.; Bouhaddi, N.; Bouazizi, M. L.

    2016-04-01

    We investigate the nonlinear dynamics of magnetically coupled beams for multi-modal vibration energy harvesting. A multi-physics model for the proposed device is developed, taking into account geometric and magnetic nonlinearities. The coupled nonlinear equations of motion are solved using Galerkin discretization combined with the harmonic balance method and the asymptotic numerical method. Several numerical simulations show that the expected performance of the proposed vibration energy harvester is promising, with up to a 130% increase in bandwidth and up to 60 μW cm⁻³ g⁻² of normalized harvested power.
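    The normalized power figure quoted above is raw harvested power divided by device volume and the square of base acceleration (in g), yielding μW per cm³ per g². A minimal sketch with illustrative numbers (the volume and acceleration are hypothetical, not the paper's device parameters):

```python
def normalized_power_density(power_uW, volume_cm3, accel_g):
    """Normalized harvested power in uW cm^-3 g^-2: raw power divided by
    the harvester volume and by the square of the base acceleration."""
    return power_uW / (volume_cm3 * accel_g ** 2)

# e.g. 120 uW from a hypothetical 2 cm^3 harvester driven at 1 g:
density = normalized_power_density(120.0, 2.0, 1.0)  # 60.0 uW cm^-3 g^-2
```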

  3. Multi-Modal Imaging with a Toolbox of Influenza A Reporter Viruses

    PubMed Central

    Tran, Vy; Poole, Daniel S.; Jeffery, Justin J.; Sheahan, Timothy P.; Creech, Donald; Yevtodiyenko, Aleksey; Peat, Andrew J.; Francis, Kevin P.; You, Shihyun; Mehle, Andrew

    2015-01-01

    Reporter viruses are useful probes for studying multiple stages of the viral life cycle. Here we describe an expanded toolbox of fluorescent and bioluminescent influenza A reporter viruses. The enhanced utility of these tools enabled kinetic studies of viral attachment, infection, and co-infection. Multi-modal bioluminescence and positron emission tomography–computed tomography (PET/CT) imaging of infected animals revealed that antiviral treatment reduced viral load, dissemination, and inflammation. These new technologies and applications will dramatically accelerate in vitro and in vivo influenza virus studies. PMID:26473913

  4. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
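    As an illustration of the modular design this kind of system calls for, here is a hypothetical two-module pipeline: a band-power feature extractor and a swappable nearest-centroid classifier behind a fit/predict interface. It is a sketch under assumed signal shapes, not the report's actual source code:

```python
import numpy as np

def bandpower(signal, fs, f_lo, f_hi):
    """One pluggable feature module: power of a single-channel signal
    within a frequency band, computed from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

class NearestCentroidClassifier:
    """Minimal classifier module exposing fit/predict, so any other
    method can be swapped in behind the same two calls."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.labels_}
        return self

    def predict(self, X):
        return [min(self.labels_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

# Hypothetical channels: two pure tones standing in for "low" and
# "high" workload signatures.
fs = 100
t = np.arange(fs) / fs
calm = np.sin(2 * np.pi * 5 * t)
busy = np.sin(2 * np.pi * 20 * t)

def features(sig):
    return np.array([bandpower(sig, fs, 3, 8), bandpower(sig, fs, 15, 25)])

clf = NearestCentroidClassifier().fit([features(calm), features(busy)],
                                      ["low", "high"])
```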

  5. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent, and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
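    The volume formula and regression comparison used above can be sketched directly; the diameters and volumes below are hypothetical illustrations, not the study's measurements:

```python
import numpy as np

def ellipsoid_volume(a, b, c):
    """Tumor volume from the three maximum perpendicular diameters,
    V = (pi/6) * a * b * c, assuming an ellipsoidal tumor shape."""
    return (np.pi / 6.0) * a * b * c

def regression_slope(measured, reference):
    """Slope of the ordinary least-squares line relating image-based
    volumes to reference volumes; a slope of 1 indicates no bias."""
    slope, _intercept = np.polyfit(reference, measured, deg=1)
    return slope

# Hypothetical diameters (mm) for one tumor measured on one modality:
v = ellipsoid_volume(10.0, 8.0, 6.0)  # volume in mm^3
```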

  6. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715
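    OMRNDL's non-negativity guarantee comes from multiplicative update rules. A minimal offline sketch of the classic multiplicative updates this family of methods builds on (not the paper's online, M-estimation-weighted algorithm):

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Classic multiplicative updates for V ~ W @ H. Because each update
    multiplies by a non-negative ratio, W and H stay non-negative at
    every step without any explicit projection."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```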

  7. An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup

    PubMed Central

    Miron, Alina; Rogozan, Alexandrina; Ainouz, Samia; Bensrhair, Abdelaziz; Broggi, Alberto

    2015-01-01

    The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare state-of-the-art features in a multi-modality setup (intensity, depth and flow) across the far-infrared and visible domains. The experiments show that the feature families, intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain proved superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion. PMID:26076403

  8. Multi-Modal Use of a Socially Directed Call in Bonobos

    PubMed Central

    Genty, Emilie; Clay, Zanna; Hobaiter, Catherine; Zuberbühler, Klaus

    2014-01-01

    ‘Contest hoots’ are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually agonistic chase. Interestingly, ‘contest hoots’ were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted by preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction. PMID:24454745

  9. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation and rotation invariance properties of these intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  10. Accuracy and reproducibility of tumor positioning during prolonged and multi-modality animal imaging studies

    NASA Astrophysics Data System (ADS)

    Zhang, Mutian; Huang, Minming; Le, Carl; Zanzonico, Pat B.; Claus, Filip; Kolbert, Katherine S.; Martin, Kyle; Ling, C. Clifton; Koutcher, Jason A.; Humm, John L.

    2008-10-01

    Dedicated small-animal imaging devices, e.g. positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI) scanners, are being increasingly used for translational molecular imaging studies. The objective of this work was to determine the positional accuracy and precision with which tumors in situ can be reliably and reproducibly imaged on dedicated small-animal imaging equipment. We designed, fabricated and tested a custom rodent cradle with a stereotactic template to facilitate registration among image sets. To quantify tumor motion during our small-animal imaging protocols, 'gold standard' multi-modality point markers were inserted into tumor masses on the hind limbs of rats. Three types of imaging examination were then performed with the animals continuously anesthetized and immobilized: (i) consecutive microPET and MR images of tumor xenografts in which the animals remained in the same scanner for a 2 h duration, (ii) multi-modality imaging studies in which the animals were transported between distant imaging devices and (iii) serial microPET scans in which the animals were repositioned in the same scanner for subsequent images. Our results showed that the animal tumor moved by less than 0.2-0.3 mm over a continuous 2 h microPET or MR imaging session. The process of transporting the animal between instruments introduced additional errors of ~0.2 mm. In serial animal imaging studies, positioning reproducibility within ~0.8 mm could be achieved.

  11. In vivo monitoring of structural and mechanical changes of tissue scaffolds by multi-modality imaging

    PubMed Central

    Park, Dae Woo; Ye, Sang-Ho; Jiang, Hong Bin; Dutta, Debaditya; Nonaka, Kazuhiro; Wagner, William R.; Kim, Kang

    2014-01-01

    Degradable tissue scaffolds are implanted to serve a mechanical role while healing processes occur and putatively assume the physiological load as the scaffold degrades. Mechanical failure during this period can be unpredictable as monitoring of structural degradation and mechanical strength changes at the implant site is not readily achieved in vivo, and non-invasively. To address this need, a multi-modality approach using ultrasound shear wave imaging (USWI) and photoacoustic imaging (PAI) for both mechanical and structural assessment in vivo was demonstrated with degradable poly(ester urethane)urea (PEUU) and polydioxanone (PDO) scaffolds. The fibrous scaffolds were fabricated with wet electrospinning, dyed with indocyanine green (ICG) for optical contrast in PAI, and implanted in the abdominal wall of 36 rats. The scaffolds were monitored monthly using USWI and PAI and were extracted at 0, 4, 8 and 12 wk for mechanical and histological assessment. The change in shear modulus of the constructs in vivo obtained by USWI correlated with the change in average Young's modulus of the constructs ex vivo obtained by compression measurements. The PEUU and PDO scaffolds exhibited distinctly different degradation rates and average PAI signal intensity. The distribution of PAI signal intensity also corresponded well to the remaining scaffolds as seen in explant histology. This evidence using a small animal abdominal wall repair model demonstrates that multi-modality imaging of USWI and PAI may allow tissue engineers to noninvasively evaluate concurrent mechanical stiffness and structural changes of tissue constructs in vivo for a variety of applications. PMID:24951048

  12. A multi-modal approach to assessing recovery in youth athletes following concussion.

    PubMed

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-01-01

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short- and long-term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one's participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity of clinically driven research aimed specifically at exploring concussion within the youth sport population, and more specifically, at multi-modal approaches to measuring recovery. This article provides an overview of a novel, multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting-state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community. PMID:25285728

  13. MINERVA: a multi-modality plugin-based radiation therapy treatment planning system.

    PubMed

    Wemple, C A; Wessol, D E; Nigg, D W; Cogliati, J J; Milvich, M; Fredrickson, C M; Perkins, M; Harkin, G J; Hartmann-Siantar, C L; Lehmann, J; Flickinger, T; Pletcher, D; Yuan, A; DeNardo, G L

    2005-01-01

    Researchers at the INEEL, MSU, LLNL and UCD have undertaken development of MINERVA, a patient-centric, multi-modal radiation treatment planning system that can be used for planning and analysing several radiotherapy modalities, either singly or combined, using common treatment planning tools. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The design also facilitates the future integration of improved planning technologies. The code is being developed in the Java programming language for interoperability. The MINERVA design includes image processing, model definition and data analysis modules, with a central module to coordinate communication and data transfer. Dose calculation is performed by source and transport plugin modules, which communicate either directly through the database or through MINERVA's openly published, extensible markup language (XML)-based application programmer's interface (API). All internal data are managed by a database management system and can be exported to other applications or new installations through the API data formats. A full computation path has been established for molecular-targeted radiotherapy treatment planning, with additional treatment modalities presently under development. PMID:16604627

  14. Multi-modal signal acquisition using a synchronized wireless body sensor network in geriatric patients.

    PubMed

    Pflugradt, Maik; Mann, Steffen; Tigges, Timo; Görnig, Matthias; Orglmeister, Reinhold

    2016-02-01

    Wearable home-monitoring devices acquiring various biosignals such as the electrocardiogram, photoplethysmogram, electromyogram, respiratory activity and movements have become popular in many fields of research, medical diagnostics and commercial applications. Ambulatory settings in particular introduce still-unsolved challenges for the development of sensor hardware and smart signal processing approaches. This work gives a detailed insight into a novel wireless body sensor network and addresses critical aspects such as signal quality, synchronicity among multiple devices, and the system's overall capabilities and limitations in cardiovascular monitoring. An early sign of typical cardiovascular disease is often disturbed autonomic regulation, such as orthostatic intolerance. In that context, blood pressure measurements play an important role in observing abnormalities like hypo- or hypertension. Non-invasive and unobtrusive blood pressure monitoring still poses a significant challenge, promoting alternative approaches including pulse wave velocity considerations. In the scope of this work, the presented hardware is applied to demonstrate the continuous extraction of multi-modal parameters like pulse arrival time within a preliminary clinical study. A Schellong test to diagnose orthostatic hypotension, which is typically based on blood pressure cuff measurements, was conducted, serving as an application that might significantly benefit from novel multi-modal measurement principles. It is further shown that the system's synchronicity is as precise as 30 μs and that the integrated analog preprocessing circuits and additional accelerometer data provide significant advantages in ambulatory measurement environments. PMID:26479338

  15. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage. PMID:26672045

  16. Entropy and Laplacian images: structural representations for multi-modal registration.

    PubMed

    Wachinger, Christian; Navab, Nassir

    2012-01-01

    The standard approach to multi-modal registration is to apply sophisticated similarity metrics such as mutual information. The disadvantage of these metrics, in comparison to measuring the intensity difference with, e.g., the L1 or L2 distance, is the increase in computational complexity and consequently in the runtime of the registration. An alternative approach, which has not yet gained much attention in the literature, is to find image representations, so-called structural representations, that allow for the application of the L1 and L2 distance to multi-modal images. This not only has the advantage of a faster similarity calculation but also enables the application of more sophisticated optimization strategies. In this article, we theoretically analyze the requirements for structural representations. Further, we introduce two approaches to create such representations, based on the calculation of patch entropy and on manifold learning, respectively. While the application of entropy has practical advantages in terms of computational complexity, the usage of manifold learning has theoretical advantages, by presenting an optimal approximation to one of the theoretical requirements. We perform experiments on multiple datasets for rigid, deformable, and groupwise registration, with good results with respect to both runtime and quality of alignment. PMID:21632274
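    A minimal sketch of the patch-entropy representation described above: each pixel is replaced by the Shannon entropy of the intensity histogram in its local neighborhood, after which two images of different modalities can be compared with plain L1/L2 distances. The patch size and bin count here are illustrative; the paper's exact windowing may differ:

```python
import numpy as np

def entropy_image(img, patch=5, bins=16):
    """Replace each pixel with the Shannon entropy (in bits) of the
    intensity histogram computed over its local patch."""
    r = patch // 2
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    lo, hi = float(img.min()), float(img.max()) + 1e-12
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + patch, j:j + patch]
            hist, _ = np.histogram(window, bins=bins, range=(lo, hi))
            p = hist[hist > 0] / hist.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

A uniform region maps to zero entropy regardless of its absolute intensity, which is what makes the representation comparable across modalities.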

  17. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    PubMed Central

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

    Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839

  19. Multi-modal contributions to detoxification of acute pharmacotoxicity by a triglyceride micro-emulsion

    PubMed Central

    Fettiplace, Michael R; Lis, Kinga; Ripper, Richard; Kowal, Katarzyna; Pichurko, Adrian; Vitello, Dominic; Rubinstein, Israel; Schwartz, David; Akpa, Belinda S; Weinberg, Guy

    2014-01-01

    Triglyceride micro-emulsions such as Intralipid® have been used to reverse cardiac toxicity induced by a number of drugs but reservations about their broad-spectrum applicability remain because of the poorly understood mechanism of action. Herein we report an integrated mechanism of reversal of bupivacaine toxicity that includes both transient drug scavenging and a cardiotonic effect that couple to accelerate movement of the toxin away from sites of toxicity. We thus propose a multi-modal therapeutic paradigm for colloidal bio-detoxification whereby a micro-emulsion both improves cardiac output and rapidly ferries the drug away from organs subject to toxicity. In vivo and in silico models of toxicity were combined to test the contribution of individual mechanisms and reveal the multi-modal role played by the cardiotonic and scavenging actions of the triglyceride suspension. These results suggest a method to predict which drug toxicities are most amenable to treatment and inform the design of next-generation therapeutics for drug overdose. PMID:25483426

  20. Feasibility and Initial Performance of Simultaneous SPECT-CT Imaging Using a Commercial Multi-Modality Preclinical Imaging System

    PubMed Central

    Osborne, Dustin R.; Austin, Derek W.

    2015-01-01

    Multi-modality imaging provides coregistered PET-CT and SPECT-CT images; however, such multi-modality workflows usually consist of sequential scans from the individual imaging components for each modality. This typical workflow may result in long scan times, limiting the throughput of the imaging system. Conversely, acquiring multi-modality data simultaneously may improve correlation and registration of images, improve temporal alignment of the acquired data, increase imaging throughput, and benefit the scanned subject by minimizing time under anesthetic. In this work, we demonstrate the feasibility and procedure for modifying a commercially available preclinical SPECT-CT platform to enable simultaneous SPECT-CT acquisition. We also evaluate the performance of simultaneous SPECT-CT tomographic imaging with this modified system. Performance was assessed using a 57Co source, and image quality was evaluated with 99mTc phantoms in a series of simultaneous SPECT-CT scans. PMID:26146568

  1. Automatic quantification of multi-modal rigid registration accuracy using feature detectors.

    PubMed

    Hauler, F; Furtado, H; Jurisic, M; Polanec, S H; Spick, C; Laprie, A; Nestle, U; Sabatini, U; Birkfellner, W

    2016-07-21

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State-of-the-art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features (SURF) or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by a radiation oncologist) and a quantitative (mean target registration error, mTRE, based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors gave comparable results, but both overestimated the gold-standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm, whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector, with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable
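    The quality measure itself is simple once interest points have been detected and matched; a minimal sketch with hypothetical matched points (the detection and matching steps are omitted):

```python
import numpy as np

def registration_score(points_a, points_b):
    """Mean Euclidean distance between matched interest-point pairs
    found on the two registered volumes. Smaller is better; perfectly
    coincident matches give 0."""
    pa = np.asarray(points_a, dtype=float)
    pb = np.asarray(points_b, dtype=float)
    return float(np.linalg.norm(pa - pb, axis=1).mean())

# Hypothetical matched 3D landmarks (mm) from the fixed and moving images:
fixed = [[10.0, 20.0, 5.0], [40.0, 12.0, 8.0]]
moving = [[10.5, 19.8, 5.1], [40.2, 12.3, 7.9]]
score_mm = registration_score(fixed, moving)
```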

  2. Automatic quantification of multi-modal rigid registration accuracy using feature detectors

    NASA Astrophysics Data System (ADS)

    Hauler, F.; Furtado, H.; Jurisic, M.; Polanec, S. H.; Spick, C.; Laprie, A.; Nestle, U.; Sabatini, U.; Birkfellner, W.

    2016-07-01

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State-of-the-art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either the speeded-up robust features (SURF) or the Harris feature detector. The quality of the registration is defined as the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by a radiation oncologist) and a quantitative (mean target registration error, mTRE, based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors gave comparable results, but both overestimated the gold-standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm, whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector, with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable
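The registration-quality metric described above, the mean Euclidean distance between matched interest-point pairs, is straightforward to compute once correspondences are available. A minimal NumPy sketch (the landmark coordinates below are hypothetical, not data from the paper):

```python
import numpy as np

def mean_registration_error(points_fixed, points_moving):
    """Mean Euclidean distance between matched interest-point pairs.

    points_fixed, points_moving: (N, 3) arrays of corresponding
    landmark coordinates (mm) found in the two registered volumes.
    """
    fixed = np.asarray(points_fixed, dtype=float)
    moving = np.asarray(points_moving, dtype=float)
    return float(np.mean(np.linalg.norm(fixed - moving, axis=1)))

# Hypothetical matched landmarks (mm) in two registered volumes
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
b = np.array([[1.0, 0.0, 0.0], [10.0, 2.0, 0.0]])
print(mean_registration_error(a, b))  # (1 + 2) / 2 = 1.5
```

The same function serves for an mTRE-style evaluation when the point pairs are annotated landmarks rather than detector output.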

  3. A Cross-platform Toolkit for Mass Spectrometry and Proteomics

    PubMed Central

    Chambers, Matthew C.; Maclean, Brendan; Burke, Robert; Amodei, Dario; Ruderman, Daniel L; Neumann, Steffen; Gatto, Laurent; Fischer, Bernd; Pratt, Brian; Egertson, Jarrett; Hoff, Katherine; Kessner, Darren; Tasman, Natalie; Shulman, Nicholas; Frewen, Barbara; Baker, Tahmina A.; Brusniak, Mi-Youn; Paulse, Christopher; Creasy, David; Flashner, Lisa; Kani, Kian; Moulding, Chris; Seymour, Sean L.; Nuwaysir, Lydia M.; Lefebvre, Brent; Kuhlmann, Frank; Roark, Joe; Rainer, Paape; Detlev, Suckau; Hemenway, Tina; Huhmer, Andreas; Langridge, James; Connolly, Brian; Chadick, Trey; Holly, Krisztina; Eckels, Josh; Deutsch, Eric W.; Moritz, Robert L; Katz, Jonathan E.; Agus, David B.; MacCoss, Michael; Tabb, David L.; Mallick, Parag

    2012-01-01

    Mass-spectrometry-based proteomics has become an important component of biological research. Numerous proteomics methods have been developed to identify and quantify the proteins in biological and clinical samples1, identify pathways affected by endogenous and exogenous perturbations2, and characterize protein complexes3. Despite these successes, the interpretation of vast proteomics datasets remains a challenge. There have been several calls for improvements and standardization of proteomics data analysis frameworks, as well as for an application programming interface for proteomics data access4,5. In response, we have developed the ProteoWizard Toolkit, a robust set of open-source software libraries and applications designed to facilitate proteomics research. The libraries implement the first non-commercial, unified data-access interface for proteomics, bridging field-standard open formats and all common vendor formats. In addition, diverse software classes enable rapid development of vendor-agnostic proteomics software. ProteoWizard projects and applications, building upon the core libraries, are becoming standard tools for enabling significant proteomics inquiries. PMID:23051804

  4. An open-source framework for testing tracking devices using Lego Mindstorms

    NASA Astrophysics Data System (ADS)

    Jomier, Julien; Ibanez, Luis; Enquobahrie, Andinet; Pace, Danielle; Cleary, Kevin

    2009-02-01

    In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of an image-guided intervention system is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with the widespread use of extreme programming methodology, which emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as a tracking device. Our framework consists of a robotic arm built from a Lego Mindstorms set and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application programming interface (API) is cross-platform and runs on Windows, Linux and MacOS. We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and have shown that regression testing on tracking devices can be performed at low cost and significantly improves the quality of the software.
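The regression-testing idea is independent of the C++ toolkit itself: record where the robot was commanded to move, read back what the tracker reports, and fail the build if the error exceeds a tolerance. A hedged Python sketch with invented numbers (IGSTK's actual test harness is C++ and drives real hardware; nothing below comes from its API):

```python
import numpy as np

# Hypothetical data: positions commanded to the robotic arm versus the
# positions reported by the tracking device under test (mm).
commanded = np.array([[0.0, 0.0, 0.0],
                      [50.0, 0.0, 0.0],
                      [50.0, 50.0, 0.0]])
tracked = np.array([[0.1, -0.1, 0.0],
                    [50.2, 0.1, 0.1],
                    [49.9, 50.1, -0.2]])

# Per-pose Euclidean error between where the arm went and what was tracked.
errors = np.linalg.norm(commanded - tracked, axis=1)

# A regression test simply asserts the accuracy stays within tolerance,
# so a nightly build fails if the tracking component degrades.
TOLERANCE_MM = 1.0
assert errors.max() < TOLERANCE_MM, f"tracking error {errors.max():.2f} mm"
print(f"max error: {errors.max():.3f} mm")
```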

  5. OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects

    PubMed Central

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the resulting program, OpenCFU, is robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is cross-platform, open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446
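OpenCFU's actual pipeline is more elaborate (it must handle touching colonies, glare and plate artifacts), but the core task, thresholding an image and counting connected blob-like objects, can be illustrated in a few lines of NumPy/SciPy on a synthetic plate. This is a minimal sketch of the idea, not OpenCFU's algorithm:

```python
import numpy as np
from scipy import ndimage

# Synthetic "plate": background 0, three disc-shaped colonies of value 1.
img = np.zeros((60, 60))
yy, xx = np.mgrid[0:60, 0:60]
for cy, cx in [(15, 15), (15, 45), (45, 30)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 5 ** 2] = 1.0

# Threshold, then count connected components.
labels, count = ndimage.label(img > 0.5)
print(count)  # 3
```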

  6. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project:A first year progress report on: Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  7. The evolution of gadolinium based contrast agents: from single-modality to multi-modality.

    PubMed

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K

    2016-05-19

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from a single imaging modality cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, owing to limitations inherent in any single molecular imaging technique. To compensate for the deficiencies of single-function MRI contrast agents, the combination of multiple imaging modalities has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their biomedical applications. PMID:27159645

  8. Multi-modal vibration energy harvesting approach based on nonlinear oscillator arrays under magnetic levitation

    NASA Astrophysics Data System (ADS)

    Abed, I.; Kacem, N.; Bouhaddi, N.; Bouazizi, M. L.

    2016-02-01

    We propose a multi-modal vibration energy harvesting approach based on arrays of coupled levitated magnets. The equations of motion, which include the magnetic nonlinearity and the electromagnetic damping, are solved using the harmonic balance method coupled with the asymptotic numerical method. A multi-objective optimization procedure, performed with a non-dominated sorting genetic algorithm on small magnet arrays, selects the optimal solutions in terms of performance by bringing the eigenmodes close to each other in frequency and amplitude. Thanks to the nonlinear coupling and the modal interactions, even with only three coupled magnets the proposed method enables harvesting vibration energy over an operating frequency range of 4.6-14.5 Hz, with a bandwidth of 190% and a normalized power of 20.2 mW cm⁻³ g⁻².

  9. The power of correlative microscopy: multi-modal, multi-scale, multi-dimensional.

    PubMed

    Caplan, Jeffrey; Niethammer, Marc; Taylor, Russell M; Czymmek, Kirk J

    2011-10-01

    Correlative microscopy is a sophisticated approach that combines the capabilities of typically separate, but powerful microscopy platforms: often including, but not limited to, conventional light, confocal and super-resolution microscopy, atomic force microscopy, transmission and scanning electron microscopy, magnetic resonance imaging and micro/nano CT (computed tomography). When targeting rare or specific events within large populations or tissues, correlative microscopy is increasingly being recognized as the method of choice. Furthermore, this multi-modal assimilation of technologies provides complementary and often unique information, such as internal and external spatial, structural, biochemical and biophysical details from the same targeted sample. The development of a continuous stream of cutting-edge applications, probes, preparation methodologies, hardware and software developments will enable realization of the full potential of correlative microscopy. PMID:21782417

  10. Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Zhang, Minliang; Chen, Qian

    2016-05-01

    We demonstrate a simple and cost-effective programmable aperture microscope that realizes multi-modal computational imaging by integrating a programmable liquid crystal display (LCD) into a conventional wide-field microscope. The LCD selectively modulates the light distribution at the rear aperture of the microscope objective, allowing numerous imaging modalities, such as bright field, dark field, differential phase contrast, quantitative phase imaging, multi-perspective imaging, and full-resolution light field imaging, to be achieved and switched rapidly in the same setup, without requiring specialized hardware or any moving parts. We experimentally demonstrate the success of our method by imaging unstained cheek cells, profiling a microlens array, and changing perspective views of thick biological specimens. Post-exposure refocusing of a butterfly mouthpart and an RFP-labeled dicot stem cross-section is also presented to demonstrate the full-resolution light field imaging capability of our system for both translucent and fluorescent specimens.

  11. Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome

    PubMed Central

    Choudhry, Netan; Rao, Rajesh C.

    2015-01-01

    Background Waardenburg syndrome is characterized by a group of features including: telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition, examined with multi-modal ultra-widefield imaging in this report. Material/Methods Report of a single case. Results Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed normal vasculature; however, a thickened choroid was seen on enhanced-depth imaging spectral-domain OCT (EDI SD-OCT). Conclusion(s) Choroidal hypopigmentation is a unique feature of Waardenburg syndrome which can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition, and its thickness can be measured with EDI SD-OCT. PMID:26114849

  12. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in the development of advanced Monte Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, fast-neutron therapy. The most recent INEEL software system of this type is known as SERA, the Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a project to collaborate on the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  13. Multi-Modality fiducial marker for validation of registration of medical images with histology

    NASA Astrophysics Data System (ADS)

    Shojaii, Rushin; Martel, Anne L.

    2010-03-01

    A multi-modality fiducial marker is presented in this work, which can be used for validating the correlation of histology images with medical images. This marker can also be used for landmark-based image registration. Seven different fiducial markers including a catheter, spaghetti, black spaghetti, cuttlefish ink, and liquid iron are implanted in a mouse specimen and then investigated based on visibility, localization, size, and stability. The black spaghetti and the mixture of cuttlefish ink and flour are shown to be the most suitable markers. Based on the size of the markers, black spaghetti is more suitable for big specimens and the mixture of the cuttlefish ink, flour, and water injected in a catheter is more suitable for small specimens such as mouse tumours. These markers are visible on medical images and also detectable on histology and optical images of the tissue blocks. The main component in these agents which enhances the contrast is iron.
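Once such fiducials have been localized in both modalities, landmark-based rigid registration reduces to the classic least-squares (Kabsch/Procrustes) problem, and the residual distance at the markers gives the fiducial registration error. A generic NumPy sketch with made-up marker coordinates (an illustration of the standard technique, not the authors' code or data):

```python
import numpy as np

def rigid_from_fiducials(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst:
    the Kabsch/Procrustes solution, e.g. from paired fiducial markers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical fiducial positions: dst is src rotated 90 degrees about z, then shifted.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([5.0, 2.0, 0.0])

R, t = rigid_from_fiducials(src, dst)
fre = np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()  # fiducial registration error
print(round(fre, 6))  # ~0 for noise-free points
```

With real marker localizations the residual `fre` is nonzero and serves as the validation statistic.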

  14. Multi-modality Optical Imaging of Rat Kidney Dysfunction: In Vivo Response to Various Ischemia Times.

    PubMed

    Ding, Zhenyang; Jin, Lily; Wang, Hsing-Wen; Tang, Qinggong; Guo, Hengchang; Chen, Yu

    2016-01-01

    We observed in vivo kidney dysfunction at various ischemia times (30, 75, 90, and 120 min) using multi-modality optical imaging: optical coherence tomography (OCT), Doppler OCT (DOCT), and two-photon microscopy (TPM). We imaged the renal tubule lumens and glomeruli at several areas of each kidney before, during, and after ischemia in 5-month-old female Munich-Wistar rats. For animals with 30 and 75 min ischemia times, we observed that all areas recovered after ischemia: tubule lumens re-opened and glomerular blood flow was re-established. For animals with 90 and 120 min ischemia times, we observed unrecovered areas in which tubule lumens remained closed after ischemia. TPM imaging confirmed the OCT results and provided higher-resolution images, visualizing renal tubule lumens and glomerular blood flow at the cellular level. PMID:27526162

  15. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from a single imaging modality cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, owing to limitations inherent in any single molecular imaging technique. To compensate for the deficiencies of single-function MRI contrast agents, the combination of multiple imaging modalities has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their biomedical applications.

  16. A Distance Measure Comparison to Improve Crowding in Multi-Modal Problems.

    SciTech Connect

    D. Todd VOllmer; Terence Soule; Milos Manic

    2010-08-01

    Solving multi-modal optimization problems is of interest to researchers working on real-world problems in areas such as control systems and power engineering. Extensions of simple genetic algorithms, particularly forms of crowding, have been developed to help solve these types of problems. This paper examines the performance of two distance measures, Mahalanobis and Euclidean, used in two different crowding implementations against five minimization functions. Within the context of the experiments, empirical evidence shows that the statistically based Mahalanobis distance measure, when used in Deterministic Crowding, produces results equivalent to a Euclidean measure. In the case of Restricted Tournament Selection, use of Mahalanobis found on average 40% more of the global optima, maintained a 35% higher peak count, and produced an average final best fitness value three times better.
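The distinction the paper exercises is easy to see in isolation: Euclidean distance treats all directions equally, while Mahalanobis distance discounts displacement along directions in which the population already varies. A small NumPy illustration on synthetic correlated data (not the paper's GA benchmark):

```python
import numpy as np

# A correlated population: Mahalanobis distance accounts for the
# covariance structure that plain Euclidean distance ignores.
rng = np.random.default_rng(0)
cov = np.array([[4.0, 1.9], [1.9, 1.0]])
pop = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

VI = np.linalg.inv(np.cov(pop.T))  # inverse covariance estimated from the data

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def mahalanobis(a, b, VI):
    d = a - b
    return float(np.sqrt(d @ VI @ d))

a = np.array([2.0, 1.0])   # lies along the population's correlated axis
b = np.array([0.0, 0.0])
c = np.array([1.0, -1.0])  # shorter in Euclidean terms, but against the correlation

print(euclidean(a, b), euclidean(c, b))          # a is Euclidean-farther
print(mahalanobis(a, b, VI), mahalanobis(c, b, VI))  # yet Mahalanobis-closer
```

In a crowding scheme, this changes which parent a offspring is judged "nearest" to, which is exactly where the two measures can diverge.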

  17. Achromatic approach to phase-based multi-modal imaging with conventional X-ray sources.

    PubMed

    Endrizzi, Marco; Vittoria, Fabio A; Kallon, Gibril; Basta, Dario; Diemoz, Paul C; Vincenzi, Alessandro; Delogu, Pasquale; Bellazzini, Ronaldo; Olivo, Alessandro

    2015-06-15

    Compatibility with polychromatic radiation is an important requirement for an imaging system using conventional rotating anode X-ray sources. With a commercially available energy-resolving single-photon-counting detector we investigated how broadband radiation affects the performance of a multi-modal edge-illumination phase-contrast imaging system. The effect of X-ray energy on phase retrieval is presented, and the achromaticity of the method is experimentally demonstrated. Comparison with simulated measurements integrating over the energy spectrum shows that there is no significant loss of image quality due to the use of polychromatic radiation. This means that, to a good approximation, the imaging system exploits radiation in the same way at all energies typically used in hard-X-ray imaging. PMID:26193618

  18. Dynamic Graph Analytic Framework (DYGRAF): greater situation awareness through layered multi-modal network analysis

    NASA Astrophysics Data System (ADS)

    Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.

    2012-06-01

    Understanding the structure and dynamics of networks are of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove the burden from the analyst of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure which facilitates a layered multi-modal network analysis (LMMNA) approach that enables analysts to assemble previously disconnected, yet related, networks in a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.

  19. Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.

    2014-03-01

    Genetically modified T cells expressing chimeric antigen receptors (CARs) exert an anti-tumor effect by identifying tumor-associated antigens (TAAs), independent of the major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical: this determines whether cells home to the tumor and assists in moderating cell dose. Here, T cells are modified to express a CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading large numbers of cells with multi-modal (PET-MRI) contrast agents (superparamagnetic iron oxide nanoparticles labeled with copper-64; SPION-64Cu). This can potentially be used for 64Cu-based whole-body PET to detect regions of T cell accumulation with high sensitivity, followed by SPION-based MRI of these regions for high-resolution, anatomically correlated images of the T cells. CD19-specific-CAR+ SPIONpos T cells effectively target CD19+ lymphoma in vitro.

  20. The Power of Correlative Microscopy: Multi-modal, Multi-scale, Multi-dimensional

    PubMed Central

    Caplan, Jeffrey; Niethammer, Marc; Taylor, Russell M.; Czymmek, Kirk J.

    2011-01-01

    Correlative microscopy is a sophisticated approach that combines the capabilities of typically separate, but powerful microscopy platforms: often including, but not limited to, conventional light, confocal and super-resolution microscopy, atomic force microscopy, transmission and scanning electron microscopy, magnetic resonance imaging and micro/nanoCT (computed tomography). When targeting rare or specific events within large populations or tissues, correlative microscopy is increasingly being recognized as the method of choice. Furthermore, this multi-modal assimilation of technologies provides complementary and often unique information, such as internal and external spatial, structural, biochemical and biophysical details from the same targeted sample. The development of a continuous stream of cutting-edge applications, probes, preparation methodologies, hardware and software developments will enable realization of the full potential of correlative microscopy. PMID:21782417

  1. Learning by Doing: How to Develop a Cross-Platform Web App

    ERIC Educational Resources Information Center

    Huynh, Minh; Ghimire, Prashant

    2015-01-01

    As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience in a project that involved the development of a student services web app that can be run on cross-platform mobile devices. The paper first describes the background of the project,…

  2. Making Dynamic Digital Maps Cross-Platform and WWW Capable

    NASA Astrophysics Data System (ADS)

    Condit, C. D.

    2001-05-01

    High-quality color geologic maps are an invaluable information resource for educators, students and researchers. However, maps with large datasets that include images, or various types of movies, in addition to site locations where analytical data has been collected, are difficult to publish in a format that facilitates their easy access, distribution and use. The development of capable desktop computers and object oriented graphical programming environments has facilitated publication of such data sets in an encapsulated form. The original Dynamic Digital Map (DDM) programs, developed using the Macintosh based SuperCard programming environment, exemplified this approach, in which all data are included in a single package designed so that display and access to the data did not depend on proprietary programs. These DDMs were aimed for ease of use, and allowed data to be displayed by several methods, including point-and-click at icons pin-pointing sample (or image) locations on maps, and from clicklists of sample or site numbers. Each of these DDMs included an overview and automated tour explaining the content organization and program use. This SuperCard development culminated in a "DDM Template", which is a SuperCard shell into which SuperCard users could insert their own content and thus create their own DDMs, following instructions in an accompanying "DDM Cookbook" (URL http://www.geo.umass.edu/faculty/condit/condit2.html). These original SuperCard-based DDMs suffered two critical limitations: a single user platform (Macintosh) and, although they can be downloaded from the web, their use lacked an integration into the WWW. Over the last eight months I have been porting the DDM technology to MetaCard, which is aggressively cross-platform (11 UNIX dialects, WIN32 and Macintosh). The new MetaCard DDM is redesigned to make the maps and images accessible either from CD or the web, using the "LoadNGo" concept. LoadNGo allows the user to download the stand-alone DDM

  3. Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)

    SciTech Connect

    Aiken, R.J.; Carlson, R.A.; Foster, I.T.

    1997-01-01

    The research and education (R&E) community requires persistent and scalable network infrastructure to concurrently support production and research applications as well as network research. In the past, the R&E community has relied on supporting parallel network and end-node infrastructures, which can be very expensive and inefficient for network service managers and application programmers. The grand challenge in networking is to provide support for multiple, concurrent, multi-layer views of the network for applications and network researchers, and to satisfy the sometimes conflicting requirements of both while ensuring that one type of traffic does not adversely affect the other. Internet and telecommunications service providers will also benefit from a multi-modal infrastructure, which can provide smoother transitions to new technologies and allow these technologies to be tested with real user traffic while still in pre-production. The authors' proposed approach requires using as much of the same network and end-system infrastructure as possible to reduce the costs of supporting both classes of activities (i.e., production and research). Breaking the infrastructure into segments and objects (e.g., routers, switches, multiplexors, circuits, paths) gives the capability to dynamically construct and configure virtual active networks to address these requirements. These capabilities must be supported at the campus, regional, and wide-area network levels to allow collaboration by geographically dispersed groups. The Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet) described in this report is an initial architecture and framework designed to identify and support the capabilities needed for the proposed combined infrastructure and to address related research issues.

  4. Improving protein secondary structure prediction using a multi-modal BP method.

    PubMed

    Qu, Wu; Sui, Haifeng; Yang, Bingru; Qian, Wenbin

    2011-10-01

    Methods for predicting protein secondary structures provide information that is useful both in ab initio structure prediction and as additional restraints for fold recognition algorithms. Secondary structure predictions may also be used to guide the design of site-directed mutagenesis studies, and to locate potential functionally important residues. In this article, we propose a multi-modal back propagation neural network (MMBP) method for predicting protein secondary structures. Using a Knowledge Discovery Theory based on Inner Cognitive Mechanism (KDTICM) method, we have constructed a compound pyramid model (CPM), which is composed of three layers of intelligent interface that integrate the multi-modal back propagation neural network (MMBP), a mixed-modal SVM (MMS), a modified Knowledge Discovery in Databases (KDD*) process, and other components. The CPM method is both an integrated web server and a standalone application that exploits recent advancements in knowledge discovery and machine learning to perform very accurate protein secondary structure predictions. Using a non-redundant test dataset of 256 proteins from RCASP256, the CPM method achieves an average Q3 score of 86.13% (SOV99 = 84.66%). Extensive testing indicates that this is significantly better than any other method currently available. Assessments using the RS126 and CB513 datasets indicate that the CPM method can achieve average Q3 scores approaching 83.99% (SOV99 = 80.25%) and 85.58% (SOV99 = 81.15%). By using both sequence and structure databases and by exploiting the latest techniques in machine learning, it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called CPM, which performs these secondary structure predictions, is accessible at http://kdd.ustb.edu.cn/protein_Web/. PMID:21880310

  5. Random forest-based similarity measures for multi-modal classification of Alzheimer's disease.

    PubMed

    Gray, Katherine R; Aljabar, Paul; Heckemann, Rolf A; Hammers, Alexander; Rueckert, Daniel

    2013-01-15

    Neurodegenerative disorders, such as Alzheimer's disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multi-modality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities outperforms the classification based on any individual modality for comparisons between Alzheimer's disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer's disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differs by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the future.
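
    The pairwise similarity at the heart of this framework — two samples are similar if they land in the same leaf of many trees — is easy to sketch. The following is a minimal numpy illustration, not the authors' code: the toy leaf-assignment matrices stand in for the output of trained forests (e.g. what scikit-learn's `RandomForestClassifier.apply` returns), and classical MDS stands in for the paper's manifold embedding.

```python
import numpy as np

def forest_similarity(leaves):
    """leaves[i, t] = leaf index of sample i in tree t.
    Similarity = fraction of trees in which two samples share a leaf."""
    n = leaves.shape[0]
    S = np.empty((n, n))
    for i in range(n):
        S[i] = (leaves == leaves[i]).mean(axis=1)
    return S

def classical_mds(S, dim=2):
    """Embed a similarity matrix: similarities -> squared distances,
    double-centre, then keep the top eigenvectors."""
    n = len(S)
    D2 = 2.0 * (1.0 - S)                  # dissimilarity from similarity in [0, 1]
    J = np.eye(n) - 1.0 / n               # centring matrix
    B = -0.5 * J @ D2 @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Joint embedding: average the per-modality similarities, then embed once.
leaves_mri = np.array([[0, 1], [0, 1], [2, 3], [2, 3]])   # toy two-tree forests
leaves_pet = np.array([[5, 0], [5, 0], [5, 1], [6, 1]])
S_joint = (forest_similarity(leaves_mri) + forest_similarity(leaves_pet)) / 2
coords = classical_mds(S_joint, dim=2)    # one row of coordinates per subject
```

    Classification would then run on `coords`; the paper combines four modalities the same way, merging similarities before building a single joint embedding.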

  6. Multi-Modality Mediastinal Staging for Lung Cancer Among Medicare Beneficiaries

    PubMed Central

    Farjah, Farhood; Flum, David R.; Ramsey, Scott D.; Heagerty, Patrick J.; Symons, Rebecca Gaston; Wood, Douglas E.

    2009-01-01

    Introduction The use of non-invasive and invasive diagnostic tests improves the accuracy of mediastinal staging for lung cancer. It is unknown how frequently multi-modality mediastinal staging is used, or whether its use is associated with better health outcomes. Methods A cohort study was conducted using SEER-Medicare data (1998–2005). Patients were categorized as having undergone single (CT only), bi- (CT and PET or CT and invasive staging), or tri-modality (CT, PET, and invasive staging) staging. Results Among 43,912 subjects, 77%, 21%, and 2% received single, bi-, and tri-modality staging, respectively. The use of single modality staging decreased over time from 90% in 1998 to 67% in 2002 (p-trend <0.001), whereas the use of bi- and tri-modality staging increased from 10% to 30% and 0.4% to 5%, respectively. After adjustment for differences in patient characteristics, the use of a greater number of staging modalities was associated with a lower risk of death (bi- versus single modality: HR 0.58, 99% CI 0.56–0.60; tri- versus single modality: HR 0.49, 99% CI 0.45–0.54; tri- versus bi-modality: HR 0.85, 99% CI 0.77–0.93). These associations were maintained even after excluding stage IV patients or adjustment for stage. Conclusions The use of multi-modality mediastinal staging increased over time and was associated with better survival. Stage migration and unmeasured patient and provider characteristics may have affected the magnitude of these associations. Cancer treatment guidelines should emphasize the potential relationship between staging procedures and outcomes, and health care policy should encourage adherence to staging guidelines. PMID:19156000

  7. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    PubMed

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train, and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all three groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
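
    Multi-kernel fusion of this kind can be sketched compactly: build one kernel per modality, combine them with weights (learned in the study, fixed here), and train a single kernel machine on the combination. The sketch below is not the authors' pipeline — it uses numpy, synthetic data, and kernel ridge regression in place of the study's classifier and weight-learning procedure.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(kernels, weights):
    """Convex combination of per-modality kernels (weights normalized)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

def kernel_ridge_fit(K, y, lam=1e-2):
    """Closed-form kernel ridge: alpha = (K + lam*I)^-1 y."""
    return np.linalg.solve(K + lam * np.eye(len(K)), y)

def kernel_ridge_predict(K_test_train, alpha):
    return np.sign(K_test_train @ alpha)

# Toy example: modality A separates the classes, modality B is uninformative.
rng = np.random.default_rng(0)
Xa = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
Xb = rng.normal(0, 0.1, (20, 2))
y = np.array([-1.0] * 10 + [1.0] * 10)
K = combined_kernel([rbf_kernel(Xa, Xa), rbf_kernel(Xb, Xb)], [0.8, 0.2])
alpha = kernel_ridge_fit(K, y)
pred = kernel_ridge_predict(K, alpha)     # training-set predictions
```

    In the study, the per-modality weights themselves are learned inside the 10-fold cross-validation loop, which is what makes the modality-importance ranking possible.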

  9. In vivo monitoring of structural and mechanical changes of tissue scaffolds by multi-modality imaging.

    PubMed

    Park, Dae Woo; Ye, Sang-Ho; Jiang, Hong Bin; Dutta, Debaditya; Nonaka, Kazuhiro; Wagner, William R; Kim, Kang

    2014-09-01

    Degradable tissue scaffolds are implanted to serve a mechanical role while healing processes occur and putatively assume the physiological load as the scaffold degrades. Mechanical failure during this period can be unpredictable as monitoring of structural degradation and mechanical strength changes at the implant site is not readily achieved in vivo and non-invasively. To address this need, a multi-modality approach using ultrasound shear wave imaging (USWI) and photoacoustic imaging (PAI) for both mechanical and structural assessment in vivo was demonstrated with degradable poly(ester urethane)urea (PEUU) and polydioxanone (PDO) scaffolds. The fibrous scaffolds were fabricated with wet electrospinning, dyed with indocyanine green (ICG) for optical contrast in PAI, and implanted in the abdominal wall of 36 rats. The scaffolds were monitored monthly using USWI and PAI and were extracted at 0, 4, 8 and 12 wk for mechanical and histological assessment. The change in shear modulus of the constructs in vivo obtained by USWI correlated with the change in average Young's modulus of the constructs ex vivo obtained by compression measurements. The PEUU and PDO scaffolds exhibited distinctly different degradation rates and average PAI signal intensity. The distribution of PAI signal intensity also corresponded well to the remaining scaffolds as seen in explant histology. This evidence using a small animal abdominal wall repair model demonstrates that multi-modality imaging of USWI and PAI may allow tissue engineers to noninvasively evaluate concurrent mechanical stiffness and structural changes of tissue constructs in vivo for a variety of applications. PMID:24951048
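
    The link between the two measurements is the standard elasticity relation behind shear wave imaging: for a linear, isotropic, locally homogeneous medium the shear modulus is μ = ρc², and for nearly incompressible soft tissue Young's modulus is approximately 3μ. A tiny sketch of these textbook relations (not the authors' processing code):

```python
def shear_modulus(wave_speed_m_s, density_kg_m3=1000.0):
    """mu = rho * c^2 (linear, isotropic, locally homogeneous medium), in Pa."""
    return density_kg_m3 * wave_speed_m_s ** 2

def youngs_modulus(shear_mod):
    """E ~ 3*mu for nearly incompressible tissue (Poisson ratio ~ 0.5)."""
    return 3.0 * shear_mod

mu = shear_modulus(2.0)    # a 2 m/s shear wave implies mu = 4 kPa
E = youngs_modulus(mu)     # ~ 12 kPa, comparable to ex vivo compression moduli
```

    This is why a change in USWI-derived shear modulus can track a change in compression-derived Young's modulus, as the abstract reports.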

  10. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
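
    A Web Map Service client like the Mission Support System ultimately boils down to issuing OGC-standard GetMap requests. The sketch below builds such a request using only the Python standard library; the server URL and layer name are made up for illustration, and the parameter names follow the WMS 1.3.0 specification.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, size=(800, 400),
                   crs="EPSG:4326", time=None, fmt="image/png"):
    """Assemble a WMS 1.3.0 GetMap request URL (OGC-standard parameters)."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": crs, "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": fmt,
    }
    if time is not None:
        params["TIME"] = time            # forecast valid time, ISO 8601
    return base_url + "?" + urlencode(params)

url = wms_getmap_url(
    "https://example.invalid/mss/wms",   # hypothetical server
    "ecmwf.temperature_2m",              # hypothetical layer name
    bbox=(-30.0, -40.0, 60.0, 40.0),
    time="2013-04-01T12:00:00Z",
)
```

    Fetching `url` with any HTTP client returns a rendered map image, which is how a WMS-based planning tool can overlay forecast fields on a flight-track display without shipping the raw NetCDF data.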

  11. Diagnosis-Guided Method For Identifying Multi-Modality Neuroimaging Biomarkers Associated With Genetic Risk Factors In Alzheimer's Disease

    PubMed Central

    Hao, Xiaoke; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li

    2015-01-01

    Many recent imaging genetic studies focus on detecting the associations between genetic markers such as single nucleotide polymorphisms (SNPs) and quantitative traits (QTs). Although there exist a large number of generalized multivariate regression analysis methods, few of them have used diagnosis information in subjects to enhance the analysis performance. In addition, few traditional methods have investigated the identification of multi-modality phenotypic patterns associated with genotype groups of interest. To reveal disease-relevant imaging genetic associations, we propose a novel diagnosis-guided multi-modality (DGMM) framework to discover multi-modality imaging QTs that are associated with both Alzheimer's disease (AD) and its top genetic risk factor (i.e., APOE SNP rs429358). The strength of our proposed method is that it explicitly models the a priori diagnosis information among subjects in the objective function for selecting the disease-relevant and robust multi-modality QTs associated with the SNP. We evaluate our method on two modalities of imaging phenotypes, i.e., those extracted from structural magnetic resonance imaging (MRI) data and fluorodeoxyglucose positron emission tomography (FDG-PET) data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method not only achieves better performance under the metrics of root mean squared error and correlation coefficient but also can identify common informative regions of interest (ROIs) across multiple modalities to guide the disease-induced biological interpretation, compared with other reference methods. PMID:26776178
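
    At its simplest, an imaging-genetics association of this kind regresses each imaging QT on SNP genotype. The numpy sketch below fits a per-ROI linear model with diagnosis as a covariate — a crude stand-in for the diagnosis-guided term in the DGMM objective, not the authors' method — and returns the dosage effect and its t-statistic for each ROI.

```python
import numpy as np

def snp_qt_association(qts, dosage, diagnosis):
    """Per-ROI linear association between SNP dosage and imaging QTs.

    qts:       (n_subjects, n_rois) imaging quantitative traits
    dosage:    (n_subjects,) minor-allele count (0, 1 or 2)
    diagnosis: (n_subjects,) 0/1 label, used here as a covariate
    Returns (dosage effect per ROI, t-statistic per ROI)."""
    n = len(dosage)
    X = np.column_stack([np.ones(n), dosage, diagnosis])
    beta, *_ = np.linalg.lstsq(X, qts, rcond=None)   # (3, n_rois)
    resid = qts - X @ beta
    dof = n - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof          # per-ROI error variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se
```

    ROIs with large |t| across both MRI and FDG-PET would be candidate multi-modality QTs; DGMM instead selects them jointly inside one diagnosis-weighted objective, which is the paper's contribution.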

  12. Rhesus macaques recognize unique multi-modal face-voice relations of familiar individuals and not of unfamiliar ones

    PubMed Central

    Habbershon, Holly M.; Ahmed, Sarah Z.; Cohen, Yale E.

    2013-01-01

    Communication signals in non-human primates are inherently multi-modal. However, for laboratory-housed monkeys, there is relatively little evidence in support of the use of multi-modal communication signals in individual recognition. Here, we used a preferential-looking paradigm to test whether laboratory-housed rhesus could “spontaneously” (i.e., in the absence of operant training) use multi-modal communication stimuli to discriminate between known conspecifics. The multi-modal stimulus was a silent movie of two monkeys vocalizing and an audio file of the vocalization from one of the monkeys in the movie. We found that the gaze patterns of those monkeys that knew the individuals in the movie were reliably biased toward the individual that did not produce the vocalization. In contrast, there was not a systematic gaze pattern for those monkeys that did not know the individuals in the movie. These data are consistent with the hypothesis that laboratory-housed rhesus can recognize and distinguish between conspecifics based on auditory and visual communication signals. PMID:23774779

  13. HOPC: A Novel Similarity Metric Based on Geometric Structural Properties for Multi-Modal Remote Sensing Image Matching

    NASA Astrophysics Data System (ADS)

    Ye, Yuanxin; Shen, Li

    2016-06-01

    Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures the geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images by designing a fast template matching scheme. This novel metric aims to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority to recent state-of-the-art similarity metrics (e.g., NCC and MI), and demonstrate its improved matching performance.
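
    The HOPC idea — describe both images by dense orientation histograms of a structural feature, then template-match the descriptors — can be illustrated in miniature. In the sketch below, plain gradient orientation stands in for phase congruency (an assumption that gives up HOPC's illumination and contrast invariance), and exhaustive NCC search stands in for the paper's fast matching scheme.

```python
import numpy as np

def orientation_channels(img, n_bins=8):
    """Split gradient magnitude into orientation channels (a crude stand-in
    for phase-congruency orientation maps)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    ch = np.zeros((n_bins,) + img.shape)
    for b in range(n_bins):
        ch[b][bins == b] = mag[bins == b]
    return ch

def ncc(a, b):
    """Normalized cross-correlation of two flattened descriptors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(target, template, n_bins=8):
    """Exhaustive search: descriptor NCC at every candidate offset."""
    ct = orientation_channels(template, n_bins)
    cT = orientation_channels(target, n_bins)
    th, tw = template.shape
    best, best_yx = -2.0, (0, 0)
    for y in range(target.shape[0] - th + 1):
        for x in range(target.shape[1] - tw + 1):
            s = ncc(cT[:, y:y + th, x:x + tw].ravel(), ct.ravel())
            if s > best:
                best, best_yx = s, (y, x)
    return best_yx, best

# Toy check: a crop of an image should match back at its own offset.
rng = np.random.default_rng(1)
target = rng.random((24, 24))
template = target[5:14, 7:16].copy()
offset, score = match_template(target, template)
```

    Matching orientation structure rather than raw intensity is what lets a descriptor of this family find tie-points across modalities whose intensities are not comparable, e.g. SAR against optical.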

  14. Sex in the Curriculum: The Effect of a Multi-Modal Sexual History-Taking Module on Medical Student Skills

    ERIC Educational Resources Information Center

    Lindau, Stacy Tessler; Goodrich, Katie G.; Leitsch, Sara A.; Cook, Sandy

    2008-01-01

    Purpose: The objective of this study was to determine the effect of a multi-modal curricular intervention designed to teach sexual history-taking skills to medical students. The Association of Professors of Gynecology and Obstetrics, the National Board of Medical Examiners, and others, have identified sexual history-taking as a learning objective…

  16. Providing University Education in Physical Geography across the South Pacific Islands: Multi-Modal Course Delivery and Student Grade Performance

    ERIC Educational Resources Information Center

    Terry, James P.; Poole, Brian

    2012-01-01

    Enormous distances across the vast South Pacific hinder student access to the main Fiji campus of the regional tertiary education provider, the University of the South Pacific (USP). Fortunately, USP has been a pioneer in distance education (DE) and promotes multi-modal delivery of programmes. Geography has embraced DE, but doubts remain about…

  17. OpenMS: a flexible open-source software platform for mass spectrometry data analysis.

    PubMed

    Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver

    2016-08-30

    High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease. PMID:27575624

  18. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-01

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy-to-use, cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. PMID:23305951

  19. Sensorcaching: An Open-Source platform for citizen science and environmental monitoring

    NASA Astrophysics Data System (ADS)

    O'Keefe, Michael

    Sensorcaching is an Open-Source hardware and software project designed with several goals in mind. It allows for long-term environmental monitoring with low cost and low power-usage hardware. It encourages citizens to take an active role in the health of their community by providing the means to record and explore changes in their environment. And it provides opportunities for education about the necessity and techniques of studying our planet. Sensorcaching is a 3-part project, consisting of a hardware sensor, a cross-platform mobile application, and a web platform for data aggregation. Its evolution has been driven by the desire to allow for long-term environmental monitoring by laypeople without significant capital expenditures or onerous technical burdens.

  20. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    PubMed

    Dryden, Michael D M; Wheeler, Aaron R

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour, which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100

  1. Multi-modality registration via multi-scale textural and spectral embedding representations

    NASA Astrophysics Data System (ADS)

    Li, Lin; Rusu, Mirabela; Viswanath, Satish; Penzias, Gregory; Pahwa, Shivani; Gollamudi, Jay; Madabhushi, Anant

    2016-03-01

    Intensity-based similarity measures assume that the original signal intensity of different modality images can provide statistically consistent information regarding the two modalities to be co-registered. In multi-modal registration problems, however, intensity-based similarity measures are often inadequate to identify an optimal transformation. Texture features can improve the performance of multi-modal co-registration by providing more similar appearance representations of the two images to be co-registered, compared to the signal intensity representations. Furthermore, texture features extracted at different length scales (neighborhood sizes) can reveal similar underlying structural attributes between the images to be co-registered, similarities that may not be discernible in the signal intensity representation alone. However, one limitation of using texture features is that a number of them may be redundant or dependent, and hence there is a need to identify non-redundant representations. Additionally, it is not clear which features at which specific scales reveal similar attributes across the images to be co-registered. To address this problem, we introduce a novel approach for multi-modal co-registration that employs new multi-scale image representations. Our approach comprises 4 distinct steps: (1) texture feature extraction at each length scale within both the target and template images, (2) independent component analysis (ICA) at each texture feature length scale, (3) spectral embedding (SE) of the ICA components (ICs) obtained for the texture features at each length scale, and (4) identification and combination of the optimal length scales at which to perform the co-registration. To combine and co-register across different length scales, α-mutual information (α-MI) was applied in the high-dimensional space of spectral embedding vectors to facilitate co-registration. To validate our multi-scale co-registration approach, we aligned 45 pairs of prostate
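
    The base quantity behind the α-MI criterion is mutual information estimated from a joint histogram. A minimal numpy sketch of plain Shannon MI between two aligned images is below; the paper's α-MI over high-dimensional spectral-embedding vectors generalizes this pairwise-intensity case.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Shannon MI (in nats) from the joint histogram of two aligned images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                                  # joint probability
    px = p.sum(axis=1, keepdims=True)                # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                                       # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Identical images share all their information; scrambling one destroys it.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
scrambled = rng.permutation(img.ravel()).reshape(img.shape)
mi_same = mutual_information(img, img)        # high
mi_diff = mutual_information(img, scrambled)  # near zero (plus histogram bias)
```

    A registration loop maximizes such a score over candidate transformations; applying it to texture-derived embedding vectors rather than raw intensities is what makes the approach above multi-modal.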

  2. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited number of open-source software products in China, "Red Flag Linux" stands out most strikingly, commanding a 30 percent share of the Chinese software market. Unlike the spontaneity of the open-source movement in North America, open-source software development in China, such as…

  3. There's No Need to Fear Open Source

    ERIC Educational Resources Information Center

    Balas, Janet

    2005-01-01

    The last time this author wrote about open source (OS) software was in last September's 2004 issue of Computers in Libraries, which was devoted to making the most of what you have and do-it-yourself solutions. After the column appeared, she received an e-mail from David Dorman of Index Data, who believed that she had done OS products a disservice…

  4. Open Source Software and the Intellectual Commons.

    ERIC Educational Resources Information Center

    Dorman, David

    2002-01-01

    Discusses the Open Source Software method of software development and its relationship to control over information content. Topics include digital library resources; reference services; preservation; the legal and economic status of information; technical standards; access to digital data; control of information use; and copyright and patent laws.…

  5. Of Birkenstocks and Wingtips: Open Source Licenses

    ERIC Educational Resources Information Center

    Gandel, Paul B.; Wheeler, Brad

    2005-01-01

    The notion of collaborating to create open source applications for higher education is rapidly gaining momentum. From course management systems to ERP financial systems, higher education institutions are working together to explore whether they can in fact build a better mousetrap. As Lois Brooks, of Stanford University, recently observed, the…

  6. Communal Resources in Open Source Software Development

    ERIC Educational Resources Information Center

    Spaeth, Sebastian; Haefliger, Stefan; von Krogh, Georg; Renzl, Birgit

    2008-01-01

    Introduction: Virtual communities play an important role in innovation. The paper focuses on the particular form of collective action in virtual communities underlying Open Source software development projects. Method: Building on resource mobilization theory and private-collective innovation, we propose a theory of collective action in…

  7. Implementing Rakim: Open Source Chat Reference Software

    ERIC Educational Resources Information Center

    Caraway, Shawn; Payne, Susan

    2005-01-01

    This article describes the conception, implementation, and current status of Rakim open source software at Midlands Technical College in Columbia, SC. Midlands Technical College (MTC) is a 2-year school with two large campuses and three smaller campuses. Although the library functions as a single unit, there are separate…

  8. The SAMI2 Open Source Project

    NASA Astrophysics Data System (ADS)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are 'Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  9. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices to powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors, and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, with all these advancements, we find very little open source software available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes the popular open source OCR engine Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.

  10. Open access and open source in chemistry

    PubMed Central

    Todd, Matthew H

    2007-01-01

    Scientific data are being generated and shared at ever-increasing rates. Two new mechanisms for doing this have developed: open access publishing and open source research. We discuss both, with recent examples, highlighting the differences between the two, and the strengths of both. PMID:17939849

  11. Strategy for analysis of flow diverting devices based on multi-modality image-based modeling

    PubMed Central

    Cebral, Juan R.; Mut, Fernando; Raschi, Marcelo; Ding, Yong-Hong; Kadirvel, Ramanathan; Kallmes, David

    2014-01-01

    Quantification and characterization of the hemodynamic environment created after flow diversion treatment of cerebral aneurysms is important to understand the effects of flow diverters and their interactions with the biology of the aneurysm wall and the thrombosis process that takes place subsequently. This paper describes the construction of multi-modality image-based subject-specific CFD models of experimentally created aneurysms in rabbits and subsequently treated with flow diverters. Briefly, anatomical models were constructed from 3D rotational angiography images, flow conditions were derived from Doppler ultrasound measurements, stent models were created and virtually deployed, and the results were compared to in vivo digital subtraction angiography and Doppler ultrasound images. The models were capable of reproducing in vivo observations, including velocity waveforms measured in the parent artery, peak velocity values measured in the aneurysm, and flow structures observed with digital subtraction angiography before and after deployment of flow diverters. The results indicate that regions of aneurysm occlusion after flow diversion coincide with slow and smooth flow patterns, while regions still permeable at the time of animal sacrifice were observed in parts of the aneurysm exposed to larger flow activity, i.e. higher velocities, more swirling and more complex flow structures. PMID:24719392

  12. Multi-scale and Multi-modal Analysis of Metamorphic Rocks Coupling Fluorescence and TXM Techniques

    NASA Astrophysics Data System (ADS)

    De Andrade, V. J. D.; Gursoy, D.; Wojcik, M.; DeCarlo, F.; Ganne, J.; Dubacq, B.

    2014-12-01

    Rocks are commonly polycrystalline systems presenting multi-scale chemical and structural heterogeneities inherited from crystallization processes or successive metamorphic events. Through applications to metamorphic rocks involving fluorescence microprobes and full-field spectroscopy, we illustrate how spatially resolved analytical techniques allow rock compositional variations to be related to large-scale geodynamic processes. These examples also stress the importance of multi-modality instruments with zoom-in capability, able to study samples across fields of view from millimeters down to a few μm, with spatial resolutions from micrometers down to sub-100 nanometers. In this perspective, the imaging capabilities offered by the new ultra-bright, diffraction-limited synchrotron sources will be described based on experimental data. Finally, the new hard X-ray Transmission X-ray Microscope (TXM) at Sector 32 of the APS at Argonne National Laboratory, which performs nano computed tomography with in situ capabilities, will be presented. The instrument benefits from several key R&D activities, such as the fabrication of new zone plates in the framework of the Multi-Bend Achromat (MBA) lattice upgrade at the APS, and the development of powerful tomographic reconstruction algorithms able to operate with a limited number of projections.

  13. A method of image registration for small animal, multi-modality imaging.

    PubMed

    Chow, Patrick L; Stout, David B; Komisopoulou, Evangelia; Chatziioannou, Arion F

    2006-01-21

    Many research institutions have a full suite of preclinical tomographic scanners to answer biomedical questions in vivo. Routine multi-modality imaging requires robust registration of the images generated by the various tomographs. We have implemented a hardware registration method for preclinical imaging that is similar to that used in the combined positron emission tomography (PET)/computed tomography (CT) scanners in the clinic. We designed an imaging chamber which can be rigidly and reproducibly mounted on separate microPET and microCT scanners. We have also designed a three-dimensional grid phantom with 1288 lines that is used to generate the spatial transformation matrix through software registration using a 15-parameter perspective model. The imaging chamber and the registration phantom work together to achieve the image registration goal. We verified that the average registration error between the two imaging modalities is 0.335 mm using an in vivo mouse bone scan. This paper also estimates the impact of image misalignment on PET quantitation using attenuation corrections generated from misregistered images. Our technique is expected to produce PET quantitation errors of less than 5%. The methods presented are robust and appropriate for routine use in high-throughput animal imaging facilities. PMID:16394345
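
    The phantom-based calibration amounts to fitting a spatial transformation to corresponding point sets extracted from the two scans. As a simplified sketch, the code below fits a 12-parameter 3D affine transform by least squares; the paper's 15-parameter perspective model additionally estimates a projective component, but the fitting principle is the same. All coordinates here are synthetic.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform mapping src points onto dst points."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (4, 3) parameter block
    return X.T  # (3, 4): linear block plus translation column

def apply_affine_3d(A, pts):
    return pts @ A[:, :3].T + A[:, 3]

# Synthetic check: recover a known scaling plus translation between
# "microCT" and "microPET" coordinates of phantom line intersections.
rng = np.random.default_rng(0)
src = rng.uniform(0, 50, size=(20, 3))
dst = src * 1.1 + np.array([2.0, -1.0, 0.5])
A = fit_affine_3d(src, dst)
err = np.abs(apply_affine_3d(A, src) - dst).max()
```

    With real phantom data the residual of this fit is exactly the kind of registration-error figure the paper reports.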

  14. Determining Pain Detection and Tolerance Thresholds Using an Integrated, Multi-Modal Pain Task Battery

    PubMed Central

    Hay, Justin L.; Okkerse, Pieter; van Amerongen, Guido; Groeneveld, Geert Jan

    2016-01-01

    Human pain models are useful in assessing the analgesic effect of drugs, providing information about a drug's pharmacology, and identifying potentially suitable therapeutic populations. The need for a comprehensive battery of pain models is highlighted by studies in which a single pain model, thought to relate to the clinical situation, fails to demonstrate efficacy; no single experimental model can mimic the complex nature of clinical pain. The integrated, multi-modal pain task battery presented here encompasses an electrical stimulation task, a pressure stimulation task, a cold pressor task, the UVB inflammatory model (which includes a thermal task), and a paradigm for inhibitory conditioned pain modulation. These human pain models have been tested for predictive validity and reliability, both individually and in combination, and can be used repeatedly, quickly, and in short succession, with minimal burden for the subject and a modest quantity of equipment. This allows a drug to be fully characterized and profiled for analgesic effect, which is especially useful for drugs with a novel or untested mechanism of action. PMID:27166581

  15. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI.

    PubMed

    Zhuang, Xiahai; Shen, Juan

    2016-07-01

    A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from the scale-space theory, uses the information of multi-scale images and provides different levels of the structural information of images for multi-level local atlas ranking. Both the local and global atlas ranking steps use the information theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data involving 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, Jaccard index of 0.818 ± 0.0549, and surface distance error of 1.09 ± 1.11 mm for the 20 MRI data. The average runtime for the proposed label fusion was 12.58 min. PMID:26999615
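
    The label-fusion step described above can be illustrated with a toy version of similarity-weighted voting. The sketch below is not the authors' algorithm: it builds a crude two-scale patch descriptor (the fine patch plus a block-averaged coarse version) in the spirit of the MSP strategy, and weights each atlas vote by a Gaussian of the patch distance. Structure names and parameters are invented.

```python
import numpy as np

def multiscale_descriptor(patch):
    """Concatenate a 4x4 patch with its 2x2 block-mean coarse version."""
    coarse = patch.reshape(2, 2, 2, 2).mean(axis=(1, 3))
    return np.concatenate([patch.ravel(), coarse.ravel()])

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Similarity-weighted voting: closer atlas patches get larger weights."""
    t = multiscale_descriptor(target_patch)
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        d = multiscale_descriptor(patch)
        w = np.exp(-np.sum((t - d) ** 2) / (h ** 2))  # patch-similarity weight
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

rng = np.random.default_rng(1)
target = np.ones((4, 4))
atlases = [np.ones((4, 4)) + 0.05 * rng.standard_normal((4, 4)),  # similar
           np.zeros((4, 4))]                                      # dissimilar
label = fuse_labels(target, atlases, ["myocardium", "background"])
```

    The paper's global and local atlas ranking play the same role as the weights here, but use information-theoretic similarity measures across modalities.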

  16. Development of a new multi-modal Monte-Carlo radiotherapy planning system.

    PubMed

    Kumada, H; Nakamura, T; Komeda, M; Matsumura, A

    2009-07-01

    A new multi-modal Monte-Carlo radiotherapy planning system (development code: JCDS-FX) is under development at the Japan Atomic Energy Agency. This system builds on the fundamental technologies of JCDS, which has been applied to actual boron neutron capture therapy (BNCT) trials at JRR-4. One of the features of JCDS-FX is that PHITS, a multi-purpose particle Monte-Carlo transport code, has been adopted for particle transport calculation. The use of PHITS makes it possible to evaluate the total dose delivered to a patient by a combined-modality therapy. Moreover, JCDS-FX with PHITS can be used for the study of accelerator-based BNCT. To verify the calculation accuracy of JCDS-FX, dose evaluations for neutron irradiation of a cylindrical water phantom and for an actual clinical trial were performed, and the results were compared with calculations by JCDS with MCNP. The verification results demonstrated that JCDS-FX is applicable to BNCT treatment planning in practical use. PMID:19394839
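
    The Monte-Carlo principle underlying transport codes such as PHITS can be illustrated with a deliberately tiny example: sample particle interaction depths from an exponential free-path distribution and tally deposition per depth bin. This is a didactic sketch only; real treatment-planning codes model full 3D geometry, physics processes, and multiple particle types.

```python
import random

def mc_depth_dose(n_particles, mu=0.2, slab_cm=10.0, n_bins=10, seed=42):
    """Toy Monte-Carlo depth-dose tally in a 1D water slab.

    mu is an attenuation coefficient (1/cm): free paths are drawn from an
    exponential distribution, and each in-slab interaction deposits one
    unit of energy in its depth bin.
    """
    rng = random.Random(seed)
    bin_w = slab_cm / n_bins
    dose = [0.0] * n_bins
    for _ in range(n_particles):
        depth = rng.expovariate(mu)       # sampled free path length
        if depth < slab_cm:               # particle interacts inside the slab
            dose[int(depth / bin_w)] += 1.0
    total = sum(dose)
    return [d / total for d in dose]      # normalized depth-dose curve

curve = mc_depth_dose(100_000)
```

    As expected for exponential attenuation, the tallied dose decreases monotonically with depth, and the statistical noise shrinks as more particle histories are run.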

  17. Nano-sensitizers for multi-modality optical diagnostic imaging and therapy of cancer

    NASA Astrophysics Data System (ADS)

    Olivo, Malini; Lucky, Sasidharan S.; Bhuvaneswari, Ramaswamy; Dendukuri, Nagamani

    2011-07-01

    We report novel bioconjugated nanosensitizers as optical and therapeutic probes for the detection, monitoring and treatment of cancer. These nanosensitizers, consisting of hypericin-loaded bioconjugated gold nanoparticles, can act as tumor-cell-specific therapeutic photosensitizers for photodynamic therapy, coupled with additional photothermal effects rendered by the plasmonic heating of the gold nanoparticles. In addition to the therapeutic effects, the nanosensitizers can be developed as optical probes for state-of-the-art multi-modality in-vivo optical imaging technologies such as in-vivo 3D confocal fluorescence endomicroscopic imaging, optical coherence tomography (OCT) with improved optical contrast using nano-gold, and Surface Enhanced Raman Scattering (SERS) based imaging and bio-sensing. These techniques can be used in tandem or independently as in-vivo optical biopsy techniques to specifically detect and monitor cancer cells in-vivo. Such a nanosensitizer-based optical biopsy imaging technique has the potential to provide an alternative to tissue biopsy and will enable clinicians to make real-time diagnoses, determine surgical margins during operative procedures and perform targeted treatment of cancers.

  18. Holographic Raman tweezers controlled by multi-modal natural user interface

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltán; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Valušová, Eva; Antalík, Marián; Zemánek, Pavel

    2016-01-01

    Holographic optical tweezers provide a contactless way to trap and manipulate several microobjects independently in space using focused laser beams. Although the methods of fast and efficient generation of optical traps are well developed, their user friendly control still lags behind. Even though several attempts have appeared recently to exploit touch tablets, 2D cameras, or Kinect game consoles, they have not yet reached the level of natural human interface. Here we demonstrate a multi-modal ‘natural user interface’ approach that combines finger and gaze tracking with gesture and speech recognition. This allows us to select objects with an operator’s gaze and voice, to trap the objects and control their positions via tracking of finger movement in space and to run semi-automatic procedures such as acquisition of Raman spectra from preselected objects. This approach takes advantage of the power of human processing of images together with smooth control of human fingertips and downscales these skills to control remotely the motion of microobjects at microscale in a natural way for the human operator.

  19. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos using features based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
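
    The core scoring idea, that a segment is a highlight when its features are both exciting and improbable under the joint feature density, can be sketched in a few lines. The density model (a diagonal Gaussian), the feature set, and the weighting below are illustrative stand-ins for the paper's information-theoretic formulation.

```python
import numpy as np

def excitability_scores(features):
    """features: (n_segments, n_features), e.g. [audio energy, pitch, cuts].

    Rarity is the negative log density of a diagonal Gaussian fit to all
    segments; excitement is the mean standardized feature value. Segments
    that are both exciting and rare get high scores.
    """
    mu, sigma = features.mean(0), features.std(0) + 1e-9
    z = (features - mu) / sigma
    log_density = -0.5 * (z ** 2).sum(1)       # up to an additive constant
    rarity = -log_density                      # surprisal-like term
    excitement = z.mean(1)                     # above-average feature levels
    return rarity * np.maximum(excitement, 0)  # exciting AND rare

feats = np.array([
    [0.20, 0.10, 0.00],   # quiet play
    [0.30, 0.20, 0.10],   # routine play
    [0.90, 0.80, 0.90],   # home run: loud, excited commentary, replays
    [0.25, 0.15, 0.05],
])
scores = excitability_scores(feats)
best = int(np.argmax(scores))
```

    Rank-ordering segments by these scores and keeping the top fraction yields a compressed highlight reel, as in the paper's pipeline.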

  20. Neurodegenerative Disease Diagnosis using Incomplete Multi-Modality Data via Matrix Shrinkage and Completion

    PubMed Central

    Thung, Kim-Han; Wee, Chong-Yaw; Yap, Pew-Thian; Shen, Dinggang

    2014-01-01

    In this work, we are interested in predicting the diagnostic statuses of potentially neurodegenerated patients using feature values derived from multi-modality neuroimaging data and biological data, which might be incomplete. Collecting the feature values into a matrix, with each row containing the feature vector of one sample, we propose a framework to predict the corresponding multiple target outputs (e.g., diagnosis label and clinical scores) from this feature matrix by performing matrix shrinkage followed by matrix completion. Specifically, we first combine the feature and target output matrices into a large matrix and then partition this large incomplete matrix into smaller submatrices, each consisting of samples with complete feature values (corresponding to a certain combination of modalities) and target outputs. Treating each target output as the outcome of a prediction task, we apply a 2-step multi-task learning algorithm to select the most discriminative features and samples in each submatrix. Features and samples that are not selected in any of the submatrices are discarded, resulting in a shrunk version of the original large matrix. The missing feature values and unknown target outputs of the shrunk matrix are then completed simultaneously. Experimental results using the ADNI dataset indicate that our proposed framework achieves higher classification accuracy at a greater speed when compared with conventional imputation-based classification methods, and also yields competitive performance when compared with the state-of-the-art methods. PMID:24480301
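
    The completion stage can be illustrated with a standard low-rank solver. The sketch below uses iterative soft-thresholded SVD (the soft-impute idea), which is a common choice for this kind of problem but not necessarily the authors' exact solver; the data are synthetic.

```python
import numpy as np

def soft_impute(M, mask, tau=0.1, n_iter=200):
    """Fill missing entries of M (mask True where observed) with a low-rank
    estimate via iterative singular-value soft-thresholding."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        # Keep observed entries, use current estimate at missing ones.
        U, s, Vt = np.linalg.svd(np.where(mask, M, X), full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # singular-value shrinkage
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(3)
true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 8))  # rank 2
mask = rng.uniform(size=true.shape) > 0.3      # roughly 70% observed
est = soft_impute(true, mask)
rmse_missing = np.sqrt(np.mean((est[~mask] - true[~mask]) ** 2))
baseline = np.sqrt(np.mean(true[~mask] ** 2))  # error of zero-imputation
```

    In the paper's setting, columns of the matrix hold both feature values and target outputs, so completing the shrunk matrix imputes features and predicts diagnoses in one step.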

  1. Anticipation by multi-modal association through an artificial mental imagery process

    NASA Astrophysics Data System (ADS)

    Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno

    2015-01-01

    Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and conducted two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, thus opening interesting pathways to the construction of upper-level grounded cognitive abilities.
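
    The off-line Hebbian association can be reduced to an outer-product learning rule between co-active modalities. In the toy sketch below, an imagined roll-out pairs visual codes with tactile outcomes; after training, the visual input alone re-enacts the tactile prediction, which is the basis of the anticipatory avoidance behaviour. The codings and vector sizes are invented for illustration.

```python
import numpy as np

def hebbian_train(visual_seq, tactile_seq, eta=0.1):
    """Outer-product Hebb rule: dW = eta * t v^T for each co-active pair."""
    W = np.zeros((len(tactile_seq[0]), len(visual_seq[0])))
    for v, t in zip(visual_seq, tactile_seq):   # imagined sensorimotor roll-out
        W += eta * np.outer(t, v)
    return W

def predict_touch(W, v):
    """Re-enact the tactile prediction from a visual input alone."""
    return W @ v

# "Obstacle ahead" visual code co-occurs with bumper activation in the
# simulation; "path clear" does not.
visual = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # obstacle / clear
tactile = [np.array([1.0]),     np.array([0.0])]        # bump / no bump
W = hebbian_train(visual * 10, tactile * 10)
anticipates_bump = predict_touch(W, np.array([1.0, 0.0]))[0] > 0.5
no_bump = predict_touch(W, np.array([0.0, 1.0]))[0]
```

    A robot thresholding this predicted tactile signal can turn away before any physical contact occurs, which is exactly the visually conditioned avoidance the paper describes.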

  2. Performance processes within affect-related performance zones: a multi-modal investigation of golf performance.

    PubMed

    van der Lei, Harry; Tenenbaum, Gershon

    2012-12-01

    The individual affect-related performance zones (IAPZ) method, which applies Kamata et al.'s (J Sport Exerc Psychol 24:189-208, 2002) probabilistic model of the individual zone of optimal functioning, was used to capture idiosyncratic affective patterns during golf performance. To do so, three male golfers of a varsity golf team were observed during three rounds of golf competition. The investigation implemented a multi-modal assessment approach in which the probabilistic relationships between affective states and both performance process and performance outcome measures were determined. More specifically, introspective (i.e., verbal reports) and objective (heart rate and respiration rate) measures of arousal were incorporated to examine the relationships between arousal states and both process components (i.e., routine consistency, timing) and outcome scores related to golf performance. Results revealed distinguishable and idiosyncratic IAPZs associated with physiological and introspective measures for each golfer. The associations between the IAPZs and decision-making or swing/stroke execution were strong and unique for each golfer. Results are elaborated using cognitive and affect-related concepts, and applications for practitioners are provided. PMID:22562463

  3. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    NASA Astrophysics Data System (ADS)

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-01

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency. This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning of boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code, so its application enables dose evaluation not only for BNCT but also for several other radiotherapies, such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multi-modal Monte-Carlo radiotherapy planning system are presented.

  4. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    SciTech Connect

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-25

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency. This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning of boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code, so its application enables dose evaluation not only for BNCT but also for several other radiotherapies, such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multi-modal Monte-Carlo radiotherapy planning system are presented.

  5. Multi-modal two-step floating catchment area analysis of primary health care accessibility.

    PubMed

    Langford, Mitchel; Higgs, Gary; Fry, Richard

    2016-03-01

    Two-step floating catchment area (2SFCA) techniques are popular for measuring potential geographical accessibility to health care services. This paper proposes methodological enhancements to increase the sophistication of the 2SFCA methodology by incorporating both public and private transport modes using dedicated network datasets. The proposed model yields separate accessibility scores for each modal group at each demand point to better reflect the differential accessibility levels experienced by each cohort. An empirical study of primary health care facilities in South Wales, UK, is used to illustrate the approach. Outcomes suggest the bus-riding cohort of each census tract experience much lower accessibility levels than those estimated by an undifferentiated (car-only) model. Car drivers' accessibility may also be misrepresented in an undifferentiated model because they potentially profit from the lower demand placed upon service provision points by bus riders. The ability to specify independent catchment sizes for each cohort in the multi-modal model allows aspects of preparedness to travel to be investigated. PMID:26798964
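
    The two steps of the 2SFCA computation are short enough to sketch directly. Below, step one computes each facility's supply-to-demand ratio over its catchment, and step two sums the reachable ratios per demand point; for brevity the sketch is run once per travel mode, whereas the paper's multi-modal variant pools the demand of all cohorts in step one and uses mode-specific network travel times. All supply, demand, and travel-time figures are invented.

```python
import numpy as np

def two_sfca(supply, demand, times, catchment):
    """times[i, j]: travel time from demand point i to facility j (minutes).

    Step 1: divide each facility's supply by the demand within its catchment.
    Step 2: sum the ratios of the facilities reachable from each demand point.
    """
    within = (times <= catchment).astype(float)          # reachability mask
    facility_demand = (within * demand[:, None]).sum(axis=0)
    ratio = supply / facility_demand                     # step 1, per facility
    return within @ ratio                                # step 2, per point

supply = np.array([10.0, 5.0])            # e.g. GPs at two practices
demand = np.array([100.0, 200.0, 50.0])   # population at three tracts
car_t = np.array([[5, 10], [8, 6], [20, 4]])      # car travel times
bus_t = np.array([[15, 40], [25, 18], [60, 12]])  # bus travel times
car_scores = two_sfca(supply, demand, car_t, catchment=30)
bus_scores = two_sfca(supply, demand, bus_t, catchment=30)
```

    With these invented times, the first tract's bus-riding cohort reaches fewer facilities than its car-driving cohort and so receives a lower accessibility score, mirroring the differential the study reports.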

  6. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities. PMID:27563289
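
    The provenance mechanism, an explicit processing history in the header with one timestamped entry appended per tool invocation, can be mimicked with a plain dictionary. This toy sketch only illustrates the idea; real MINC 2.0 files store such metadata in their container through the MINC programming library, and the command strings below are merely examples of tool invocations.

```python
from datetime import datetime, timezone

def append_history(header, command, when=None):
    """Append a timestamped processing-history entry to a toy header dict."""
    when = when or datetime.now(timezone.utc)
    entry = f"{when:%a %b %d %H:%M:%S %Y}>>> {command}"
    header.setdefault("history", []).append(entry)
    return header

header = {"modality": "MRI", "voxel_dims": ("zspace", "yspace", "xspace")}
append_history(header, "mincresample -like template.mnc in.mnc out.mnc",
               when=datetime(2016, 1, 15, 12, 0, 0, tzinfo=timezone.utc))
append_history(header, "mincblur -fwhm 4 out.mnc blur.mnc",
               when=datetime(2016, 1, 15, 12, 5, 0, tzinfo=timezone.utc))
```

    Because every processing step leaves such a record, the full provenance of an image can be reconstructed from its header alone, which is what makes the format attractive for large databases.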

  7. Fusion of mass spectrometry and microscopy: a multi-modality paradigm for molecular tissue mapping

    PubMed Central

    Van de Plas, Raf; Yang, Junhai; Spraggins, Jeffrey; Caprioli, Richard M.

    2015-01-01

    A new predictive imaging modality is created through the ‘fusion’ of two distinct technologies: imaging mass spectrometry (IMS) and microscopy. IMS-generated molecular maps, rich in chemical information but having coarse spatial resolution, are combined with optical microscopy maps, which have relatively low chemical specificity but high spatial information. The resulting images combine the advantages of both technologies, enabling prediction of a molecular distribution both at high spatial resolution and with high chemical specificity. Multivariate regression is used to model variables in one technology, using variables from the other technology. Several applications demonstrate the remarkable potential of image fusion: (i) ‘sharpening’ of IMS images, which uses microscopy measurements to predict ion distributions at a spatial resolution that exceeds that of measured ion images by ten times or more; (ii) prediction of ion distributions in tissue areas that were not measured by IMS; and (iii) enrichment of biological signals and attenuation of instrumental artifacts, revealing insights that are not easily extracted from either microscopy or IMS separately. Image fusion enables a new multi-modality paradigm for tissue exploration whereby mining relationships between different imaging sensors yields novel imaging modalities that combine and surpass what can be gleaned from the individual technologies alone. PMID:25707028
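
    The regression backbone of the fusion approach can be sketched with ordinary least squares: fit coarse IMS ion intensities as a function of co-registered microscopy variables, then evaluate the fitted model on the fine microscopy grid to predict a sharpened ion image. The synthetic data and single linear model below stand in for the paper's multivariate regression framework.

```python
import numpy as np

rng = np.random.default_rng(7)
n_coarse = 200                             # IMS pixels on the coarse grid
micro = rng.uniform(size=(n_coarse, 3))    # microscopy features per IMS pixel
w_true = np.array([2.0, -1.0, 0.5])        # hidden microscopy-to-ion relation
ion = micro @ w_true + 0.01 * rng.standard_normal(n_coarse)  # measured ions

# Fit ion intensity as a linear function of microscopy variables.
X = np.hstack([micro, np.ones((n_coarse, 1))])   # add intercept column
w, *_ = np.linalg.lstsq(X, ion, rcond=None)

# "Sharpening": predict ion intensities on a much finer microscopy grid.
fine = rng.uniform(size=(1000, 3))
ion_pred = np.hstack([fine, np.ones((1000, 1))]) @ w
err = np.abs(ion_pred - fine @ w_true).max()
```

    The same fitted model can also extrapolate to tissue regions never measured by IMS, which is the second application highlighted in the abstract.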

  8. PET/MRI: THE NEXT GENERATION OF MULTI-MODALITY IMAGING?

    PubMed Central

    Pichler, Bernd; Wehrl, Hans F; Kolb, Armin; Judenhofer, Martin S

    2009-01-01

    Multi-modal imaging is now well established in routine clinical practice. Especially in the field of Nuclear Medicine, new PET installations consist almost exclusively of combined PET/CT scanners rather than PET-only systems. However, PET/CT has certain notable shortcomings, including the inability to perform simultaneous data acquisition and the significant radiation dose to the patient contributed by CT. MRI offers, compared to CT, better contrast among soft tissues as well as functional-imaging capabilities. Therefore, the combination of PET with MRI provides many advantages which go far beyond simply combining functional PET information with structural MRI information. Many technical challenges, including possible interference between these modalities, have to be solved when combining PET and MRI, and various approaches have been adopted to resolve these issues. Here we present an overview of current working prototypes of combined PET/MRI scanners from different groups. In addition, besides PET/MR images of mice, the first PET/MR images of a rat, acquired with the first commercial clinical PET/MRI scanner, are presented. The combination of PET and MR is a promising tool in pre-clinical research and will certainly progress to clinical application. PMID:18396179

  9. Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents

    PubMed Central

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.

    2014-01-01

    Genetically-modified T cells expressing chimeric antigen receptors (CAR) exert anti-tumor effect by identifying tumor-associated antigen (TAA), independent of major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine if cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell number with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles – Copper-64; SPION-64Cu). This can now be potentially used for 64Cu-based whole-body PET to detect T cell accumulation region with high-sensitivity, followed by SPION-based MRI of these regions for high-resolution anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target in vitro CD19+ lymphoma. PMID:24675806

  10. Tumor lysing genetically engineered T cells loaded with multi-modal imaging agents.

    PubMed

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A; Babakhani, Aydin; Ferrari, Mauro; Li, King C; Cooper, Laurence J N

    2014-01-01

    Genetically-modified T cells expressing chimeric antigen receptors (CAR) exert anti-tumor effect by identifying tumor-associated antigen (TAA), independent of major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine if cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell number with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles - Copper-64; SPION-64Cu). This can now be potentially used for 64Cu-based whole-body PET to detect T cell accumulation region with high-sensitivity, followed by SPION-based MRI of these regions for high-resolution anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target in vitro CD19+ lymphoma. PMID:24675806

  11. Strategy for analysis of flow diverting devices based on multi-modality image-based modeling.

    PubMed

    Cebral, Juan R; Mut, Fernando; Raschi, Marcelo; Ding, Yong-Hong; Kadirvel, Ramanathan; Kallmes, David

    2014-10-01

    Quantification and characterization of the hemodynamic environment created after flow diversion treatment of cerebral aneurysms is important to understand the effects of flow diverters and their interactions with the biology of the aneurysm wall and the thrombosis process that takes place subsequently. This paper describes the construction of multi-modality image-based subject-specific CFD models of experimentally created aneurysms in rabbits and subsequently treated with flow diverters. Briefly, anatomical models were constructed from 3D rotational angiography images, flow conditions were derived from Doppler ultrasound measurements, stent models were created and virtually deployed, and the results were compared with in vivo digital subtraction angiography and Doppler ultrasound images. The models were capable of reproducing in vivo observations, including velocity waveforms measured in the parent artery, peak velocity values measured in the aneurysm, and flow structures observed with digital subtraction angiography before and after deployment of flow diverters. The results indicate that regions of aneurysm occlusion after flow diversion coincide with slow and smooth flow patterns, whereas regions still permeable at the time of animal sacrifice were observed in parts of the aneurysm exposed to larger flow activity, that is, higher velocities, more swirling, and more complex flow structures. PMID:24719392

  12. Imaging results of multi-modal ultrasound computerized tomography system designed for breast diagnosis.

    PubMed

    Opieliński, Krzysztof J; Pruchnicki, Piotr; Gudra, Tadeusz; Podgórski, Przemysław; Kurcz, Jacek; Kraśnicki, Tomasz; Sąsiadek, Marek; Majewski, Jarosław

    2015-12-01

    In the current era of ubiquitous computing, transmission and reflection methods are being intensively developed alongside improvements to classical ultrasound (US) imaging of tissue structure: in particular, ultrasound transmission tomography (UTT), analogous to X-ray computed tomography (CT), and ultrasound reflection tomography (URT), based on the synthetic aperture method used in radar imaging. This paper presents and analyses the results of ultrasound transmission tomography imaging of the internal structure of the CIRS Model 052A female breast biopsy phantom and the results of ultrasound reflection tomography imaging of a wire sample. Imaging was performed using a multi-modal ultrasound computerized tomography system developed with the participation of a private investor. The results were compared with those obtained using dual-energy CT, MR mammography, and the conventional US method. The obtained results indicate that the developed UTT and URT methods, once the scanning process is accelerated to enable in vivo examination, may be successfully used for the detection and detailed characterization of breast lesions in women. PMID:25759234
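
    The transmission-tomography principle behind UTT, that each transit-time measurement is a line integral of acoustic slowness (the reciprocal of sound speed) along a ray, can be sketched with a tiny algebraic reconstruction. The ray layout, grid size, and Kaczmarz (ART) solver below are illustrative; a real scanner uses thousands of transducer-pair rays and far larger grids.

```python
import numpy as np

def art(A, b, n_iter=500, relax=0.5):
    """Kaczmarz/ART: cycle through rays, projecting onto each ray equation."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Rays through a 3x3 slowness grid: rows, columns, and the two diagonals.
n = 3
rays = []
for i in range(n):
    row = np.zeros((n, n)); row[i, :] = 1; rays.append(row.ravel())
    col = np.zeros((n, n)); col[:, i] = 1; rays.append(col.ravel())
rays.append(np.eye(n).ravel())               # main diagonal ray
rays.append(np.fliplr(np.eye(n)).ravel())    # anti-diagonal ray
A = np.array(rays)

true_slowness = np.full((n, n), 1.0)
true_slowness[1, 1] = 1.5                    # a slower central "lesion"
b = A @ true_slowness.ravel()                # simulated transit times
recon = art(A, b).reshape(n, n)
resid = np.abs(A @ recon.ravel() - b).max()  # consistency with measurements
```

    With so few rays the system is underdetermined, so the reconstruction matches the measurements but not necessarily the true map; dense transducer coverage, as in the described scanner, is what makes the inversion well posed.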

  13. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  14. MINC 2.0: A Flexible Format for Multi-Modal Images

    PubMed Central

    Vincent, Robert D.; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L.; Fonov, Vladimir S.; Robbins, Steven M.; Baghdadi, Leila; Lerch, Jason; Sled, John G.; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P.; Collins, D. Louis; Evans, Alan C.

    2016-01-01

    It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities. PMID:27563289

  15. Multi-modal target detection for autonomous wide area search and surveillance

    NASA Astrophysics Data System (ADS)

    Breckon, Toby P.; Gaszczak, Anna; Han, Jiwan; Eichner, Marcin L.; Barnes, Stuart E.

    2013-10-01

    Generalised wide area search and surveillance is a commonplace tasking for multi-sensory equipped autonomous systems. Here we present on a key supporting topic to this task - the automatic interpretation, fusion and detected target reporting from multi-modal sensor information received from multiple autonomous platforms deployed for wide-area environment search. We detail the realization of a real-time methodology for the automated detection of people and vehicles using combined visible-band (EO), thermal-band (IR) and radar sensing from a deployed network of multiple autonomous platforms (ground and aerial). This facilitates real-time target detection, reported with varying levels of confidence, using information from both multiple sensors and multiple sensor platforms to provide environment-wide situational awareness. A range of automatic classification approaches are proposed, driven by underlying machine learning techniques, that facilitate the automatic detection of either target type with cross-modal target confirmation. Extended results are presented that show both the detection of people and vehicles under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance evaluation is presented at an episodic level, with individual classifiers optimized for maximal detection of each object of interest (vehicle/person) over a given search path/pattern of the environment, across all sensors and modalities, rather than on a per sensor sample basis. Episodic target detection, evaluated over a number of wide-area environment search and reporting tasks, generally exceeds 90% for the targets considered here.

  16. Microarray Meta-Analysis and Cross-Platform Normalization: Integrative Genomics for Robust Biomarker Discovery

    PubMed Central

    Walsh, Christopher J.; Hu, Pingzhao; Batt, Jane; Dos Santos, Claudia C.

    2015-01-01

    The diagnostic and prognostic potential of the vast quantity of publicly available microarray data has driven the development of methods for integrating data from different microarray platforms. Cross-platform integration, when appropriately implemented, has been shown to improve the reproducibility and robustness of gene signature biomarkers. Microarray platform integration can be conceptually divided into approaches that perform early-stage integration (cross-platform normalization) versus late-stage data integration (meta-analysis). A growing number of statistical methods and associated software for platform integration are available to the user; however, an understanding of their comparative performance and potential pitfalls is critical for best implementation. In this review we provide evidence-based, practical guidance to researchers performing cross-platform integration, particularly with an objective to discover biomarkers.
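    As an illustration of the early-stage integration the review describes, quantile normalization forces every sample (column) of an expression matrix onto a common empirical distribution, so measurements from different platforms become directly comparable. The following is a minimal NumPy sketch of the general technique, not code from any specific package covered by the review:

```python
import numpy as np

def quantile_normalize(matrix):
    """Quantile-normalize a genes x samples expression matrix so that
    every sample (column) shares the same empirical distribution."""
    ranks = np.argsort(np.argsort(matrix, axis=0), axis=0)   # per-column ranks
    mean_quantiles = np.sort(matrix, axis=0).mean(axis=1)    # reference distribution
    return mean_quantiles[ranks]

# Toy example: two "platforms" measuring the same three genes
platform_a = np.array([[2.0], [5.0], [9.0]])
platform_b = np.array([[1.0], [4.0], [20.0]])
combined = np.hstack([platform_a, platform_b])
normalized = quantile_normalize(combined)
# After normalization both columns contain the same set of values,
# ordered by each sample's original gene ranks.
```

    Note that this simple rank-based form does not handle ties; production implementations average the reference quantiles over tied ranks.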

  17. An open-source readout for MKIDs

    NASA Astrophysics Data System (ADS)

    Duan, Ran; McHugh, Sean; Serfass, Bruno; Mazin, Benjamin A.; Merrill, A.; Golwala, Sunil R.; Downes, Thomas P.; Czakon, Nicole G.; Day, Peter K.; Gao, Jiansong; Glenn, Jason; Hollister, Matthew I.; Leduc, Henry G.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Schlaerth, James A.; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2010-07-01

    This paper presents the design, implementation, and performance analysis of an open source readout system for arrays of microwave kinetic inductance detectors (MKIDs) for mm/submm astronomy. The readout system performs frequency-domain multiplexed real-time complex microwave transmission measurements in order to monitor the instantaneous resonance frequency and dissipation of superconducting microresonators. Each readout unit can cover up to 550 MHz of bandwidth and read out 256 complex frequency channels simultaneously. The digital electronics include the customized DAC, ADC and IF system, and the FPGA-based signal processing hardware developed by the CASPER group. The entire system is open source and can be customized to meet challenging requirements in many applications, e.g. MKID, MSQUID, etc.
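    The core signal-processing idea of such a readout - recovering the complex amplitude (I/Q) of many multiplexed probe tones from one digitized stream - can be sketched as a digital down-conversion. This is an illustrative NumPy version, not the FPGA/CASPER implementation described above (which uses polyphase filter banks); the sample rate and tone frequencies are invented:

```python
import numpy as np

def channelize(signal, fs, tone_freqs):
    """Digitally down-convert a multiplexed time stream to recover the
    complex amplitude of each probe tone: mix against each tone and
    average (a boxcar low-pass filter)."""
    t = np.arange(len(signal)) / fs
    return np.array([np.mean(signal * np.exp(-2j * np.pi * f * t))
                     for f in tone_freqs])

# Simulate two resonator channels with different amplitudes and phases
fs = 1e6
t = np.arange(4096) / fs
stream = (0.8 * np.cos(2 * np.pi * 50e3 * t + 0.3)
          + 0.5 * np.cos(2 * np.pi * 120e3 * t - 1.0))
iq = channelize(stream, fs, [50e3, 120e3])
# np.abs(iq) recovers half of each tone's amplitude (0.4 and 0.25);
# np.angle(iq) recovers each tone's phase (0.3 and -1.0 rad).
```

    A real readout monitors how these I/Q values drift in time, which encodes the instantaneous resonance frequency and dissipation of each microresonator.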

  18. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    PubMed

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). PMID:24548899

  19. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Until recently this kind of defense was a privilege of the few; under-budgeted, low-cost solutions left the defender vulnerable to the rise of innovative attack methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century in adopting open solutions to E-Commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state and face the best-known problems in data handling and consequently propose the most appealing techniques to face these challenges through an open solution.

  20. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  1. Open Source Approach to Urban Growth Simulation

    NASA Astrophysics Data System (ADS)

    Petrasova, A.; Petras, V.; Van Berkel, D.; Harmon, B. A.; Mitasova, H.; Meentemeyer, R. K.

    2016-06-01

    Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban - Regional Environment Simulation) - a patch-based, stochastic, multi-level land change modeling framework - as a case showing how what was once a closed and inaccessible model benefited from integration with open source GIS. We will describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and a graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We will describe FUTURES integration into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential or even modify the model for their applications.

  2. Consistency in multi-modal automated target detection using temporally filtered reporting

    NASA Astrophysics Data System (ADS)

    Breckon, Toby P.; Han, Ji W.; Richardson, Julia

    2012-09-01

    Autonomous target detection is an important goal in the wide-scale deployment of unattended sensor networks. Current approaches are often sample-centric, with an emphasis on achieving maximal detection on any given isolated target signature received. This can often lead to both high false alarm rates and the frequent re-reporting of detected targets, given the required trade-off between detection sensitivity and false positive target detection. Here, by assuming that the samples on a true target will be both numerous and temporally consistent, we can treat a given detection approach as an ensemble classifier distributed over time, with the classification from each sample, at each time-step, contributing to an overall detection threshold. Following this approach, we develop a mechanism whereby the temporal consistency of a given target must be statistically strong, over a given temporal window, for an onward detection to be reported. If the sensor sample frequency and throughput are high, relative to target motion through the field of view (e.g. 25fps camera), then we can validly set such a temporal window to a value above the occurrence level of spurious false positive detections. This approach is illustrated using the example of automated real-time vehicle and people detection, in multi-modal visible (EO) and thermal (IR) imagery, deployed on an unattended dual-sensor pod. A sensitive target detection approach, based on a codebook mapping of visual features, classifies target regions initially extracted from the scene using an adaptive background model. The use of temporal filtering provides a consistent, fused onward information feed of targets detected from either or both sensors whilst minimizing the onward transmission of false positive detections and facilitating the use of otherwise sensitive detection approaches within the robust target reporting context of a deployed sensor network.
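    The ensemble-over-time idea above reduces to a sliding-window vote: a target is reported only when enough per-sample detections accumulate within a recent window. A minimal sketch (the window length and vote threshold here are illustrative values, not parameters from the paper):

```python
from collections import deque

class TemporalFilter:
    """Report a target only when enough positive per-frame detections
    accumulate within a sliding window of recent frames, suppressing
    isolated spurious detections."""
    def __init__(self, window=25, min_votes=15):
        self.window = deque(maxlen=window)   # holds the last `window` votes
        self.min_votes = min_votes

    def update(self, detected):
        self.window.append(1 if detected else 0)
        return sum(self.window) >= self.min_votes

# A single spurious detection never triggers a report,
# but a temporally consistent run of detections does.
f = TemporalFilter(window=25, min_votes=15)
assert f.update(True) is False          # isolated false positive suppressed
for _ in range(20):
    reported = f.update(True)
assert reported is True                 # sustained target reported
```

    At a 25 fps sample rate this corresponds to requiring roughly 0.6 s of consistent evidence before onward reporting, which is the trade-off the abstract describes between sensitivity and false-positive suppression.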

  3. 239PU(N, f) at Resonance Energies and its Multi-Modal Interpretation

    NASA Astrophysics Data System (ADS)

    Hambsch, F.-J.; Bax, H.; Ruskov, I.; Demattè, L.

    2003-10-01

    A measurement of fission fragment total kinetic energy (TKE) and mass yield distributions Y(A,TKE) in the 239Pu(n,f) resolved resonance region has been performed applying the twin Frisch gridded ionization chamber technique. Special emphasis was devoted to coping with the strong α-activity of this isotope by an improved pile-up rejection system. Up to about 200 eV all fission resonances could be resolved and their two-dimensional mass yield and TKE distribution, Y(A,TKE), measured. Compared to the results on 235U(n,f), much smaller fluctuations of the fission fragment mass and TKE have been observed in the case of 239Pu. From a physical point of view such fluctuations have been expected for the fission fragment properties, because the only possible low-energy spin states (Jπ=0+,1+) belong to well separated (about 1.25 MeV) transition state bands. Hence, it was expected to observe differences in the fission fragment mass and TKE distributions between spin 0+ and 1+ resonances. However, no spin dependence and only a slight anti-correlation of the TKE with the prompt neutron multiplicity, νp, has been found in the resolved resonance energy region above 1 eV. Within the multi-modal random neck-rupture (MM-RNR) model the Y(A,TKE) distributions have been fitted assuming three fission modes, two asymmetric and one symmetric. The branching ratio of the two asymmetric modes shows similar fluctuations as the experimental TKE. Recently, a new theoretical approach has given a solution to the absence of pronounced fluctuations of the fission properties in the case of 239Pu. Since only one transition state is involved in the fission of 0+ and 1+ resonances with a given fission fragment distribution, no fluctuations are expected.

  4. A multi-modal prostate segmentation scheme by combining spectral clustering and active shape models

    NASA Astrophysics Data System (ADS)

    Toth, Robert; Tiwari, Pallavi; Rosen, Mark; Kalyanpur, Arjun; Pungavkar, Sona; Madabhushi, Anant

    2008-03-01

    Segmentation of the prostate boundary on clinical images is useful in a large number of applications including calculating prostate volume during biopsy, tumor estimation, and treatment planning. Manual segmentation of the prostate boundary is, however, time consuming and subject to inter- and intra-reader variability. Magnetic Resonance (MR) imaging (MRI) and MR Spectroscopy (MRS) have recently emerged as promising modalities for detection of prostate cancer in vivo. In this paper we present a novel scheme for accurate and automated prostate segmentation on in vivo 1.5 Tesla multi-modal MRI studies. The segmentation algorithm comprises two steps: (1) A hierarchical unsupervised spectral clustering scheme using MRS data to isolate the region of interest (ROI) corresponding to the prostate, and (2) an Active Shape Model (ASM) segmentation scheme where the ASM is initialized within the ROI obtained in the previous step. The hierarchical MRS clustering scheme in step 1 identifies spectra corresponding to locations within the prostate in an iterative fashion by discriminating between potential prostate and non-prostate spectra in a lower dimensional embedding space. The spatial locations of the prostate spectra so identified are used as the initial ROI for the ASM. The ASM is trained by identifying user-selected landmarks on the prostate boundary on T2 MRI images. Boundary points on the prostate are identified using mutual information (MI) as opposed to the traditional Mahalanobis distance, and the trained ASM is deformed to fit the boundary points so identified. Cross validation on 150 prostate MRI slices yields an average segmentation sensitivity, specificity, overlap, and positive predictive value of 89, 86, 83, and 93% respectively. We demonstrate that the accurate initialization of the ASM via the spectral clustering scheme is necessary for automated boundary extraction. Our method is fully automated, robust to system parameters, and computationally efficient.

  5. Preliminary Evaluation of a Multi-Modal Early Intervention Program for Behaviorally Inhibited Preschoolers

    PubMed Central

    Chronis-Tuscano, Andrea; Rubin, Kenneth H.; O’Brien, Kelly A.; Coplan, Robert J.; Thomas, Sharon Renee; Dougherty, Lea R.; Cheah, Charissa S.L.; Watts, Katie; Heverly-Fitt, Sara; Huggins, Suzanne L.; Menzer, Melissa; Begle, Annie Schulz; Wimsatt, Maureen

    2015-01-01

    Objective Approximately 15–20 percent of young children can be classified as having a behaviorally inhibited (BI) temperament. Stable BI predicts the development of later anxiety disorders (particularly social anxiety), but not all inhibited children develop anxiety. Parenting characterized by inappropriate warmth/sensitivity and/or intrusive control predicts the stability of BI and moderates risk for anxiety among high-BI children. For these reasons, we developed and examined the preliminary efficacy of the Turtle Program: a multi-modal early intervention for inhibited preschool-aged children. Method Forty inhibited children between the ages of 42–60 months and their parent(s) were randomized to either the Turtle Program (n = 18) or a waitlist control condition (WLC; n = 22). Participants randomized to the Turtle Program condition received 8 weeks of concurrent parent and child group treatment. Participants were assessed at baseline and post-treatment with multi-source assessments, including parent and teacher report measures of child anxiety, diagnostic interviews, and observations of parenting behavior. Results The Turtle Program resulted in significant beneficial effects relative to the WLC condition on maternal-reported anxiety symptoms of medium to large magnitude; large effects on parent-reported BI; medium to large effects on teacher-rated school anxiety symptoms; and medium effects on observed maternal positive affect/sensitivity. Conclusions This study provides encouraging preliminary support for the Turtle Program for young behaviorally inhibited children. Importantly, the effects of the Turtle Program generalized to the school setting. Future studies should examine whether this early intervention program improves long-term developmental outcomes for these at-risk children. PMID:25798728

  6. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiration. A variety of 4D imaging techniques have been developed, or are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, enables comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session will focus on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  7. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    NASA Astrophysics Data System (ADS)

    Paelke, V.; Nebe, K.; Geiger, C.; Klompmaker, F.; Fischer, H.

    2012-07-01

    Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is the access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations conventional cartographic displays and mouse based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to control who is

  8. Implementation of a multi-modal mobile sensor system for surface and subsurface assessment of roadways

    NASA Astrophysics Data System (ADS)

    Wang, Ming; Birken, Ralf; Shahini Shamsabadi, Salar

    2015-03-01

    There are more than 4 million miles of roads and 600,000 bridges in the United States alone. On-going investments are required to maintain the physical and operational quality of these assets to ensure public safety and the prosperity of the economy. Planning efficient maintenance and repair (M&R) operations requires a meticulous pavement inspection method that is non-disruptive, is affordable, and requires minimal manual effort. The Versatile Onboard Traffic Embedded Roaming Sensors (VOTERS) project developed a technology able to cost-effectively monitor the condition of roadway systems to plan for the right repairs, in the right place, at the right time. VOTERS technology consists of an affordable, lightweight package of multi-modal sensor systems including acoustic, optical, electromagnetic, and GPS sensors. Vehicles outfitted with this technology are capable of collecting information on a variety of pavement-related characteristics at both surface and subsurface levels as they are driven. By correlating the sensors' outputs with the positioning data collected in tight time synchronization, a GIS-based control center attaches a spatial component to all the sensors' measurements and delivers multiple ratings of the pavement every meter. These spatially indexed ratings are then leveraged by VOTERS decision making modules to plan the optimum M&R operations and predict future budget needs. In 2014, VOTERS inspection results were validated by comparing them to the outputs of recent professionally conducted condition surveys by a local engineering firm for 300 miles of Massachusetts roads. The success of the VOTERS project demonstrates the potential of rapid, intelligent, and comprehensive evaluation of tomorrow's transportation infrastructure to increase public safety, vitalize the economy, and deter catastrophic failures.
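    The time-synchronized geotagging step described above - attaching a spatial component to each sensor measurement - amounts to interpolating the GPS track at the sensor timestamps. A hypothetical sketch (the coordinates, sample rates, and function name are invented for illustration, not taken from the VOTERS system):

```python
import numpy as np

def geotag(sensor_times, gps_times, gps_lat, gps_lon):
    """Attach an interpolated GPS position to each sensor measurement
    timestamp, assuming the sensor and GPS clocks are synchronized."""
    lat = np.interp(sensor_times, gps_times, gps_lat)
    lon = np.interp(sensor_times, gps_times, gps_lon)
    return np.column_stack([lat, lon])

# GPS fixes at 1 Hz; the sensor samples fall in between the fixes
gps_t = np.array([0.0, 1.0, 2.0])
gps_lat = np.array([42.300, 42.301, 42.302])
gps_lon = np.array([-71.050, -71.049, -71.048])
sensor_t = np.array([0.25, 0.5, 1.5])
positions = geotag(sensor_t, gps_t, gps_lat, gps_lon)
# positions[0] ≈ (42.30025, -71.04975)
```

    Once every measurement carries a position, the per-meter pavement ratings can be binned along the road geometry in a GIS layer.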

  9. MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.

    PubMed

    Heinrich, Mattias P; Jenkinson, Mark; Bhushan, Manav; Matin, Tahreema; Gleeson, Fergus V; Brady, Sir Michael; Schnabel, Julia A

    2012-10-01

    Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. PMID:22722056
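    The self-similarity idea behind MIND can be sketched compactly: each pixel is described by its patch distances to a small neighbourhood, mapped through a normalized exponential, so that two images of the same structure with completely different intensity mappings yield near-identical descriptors. The following is a deliberately simplified 2D NumPy version (4-neighbourhood only, wrap-around borders, mean-based variance estimate); the published method uses a larger search region, Gaussian-weighted patches, and a symmetric Gauss-Newton registration framework:

```python
import numpy as np

def box_filter(img, radius=1):
    """Mean over a (2r+1)x(2r+1) patch via shifted sums (wrap-around borders)."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

def mind_descriptor(img, radius=1):
    """Simplified MIND: patch distances to the 4-neighbourhood mapped
    through a normalized exponential self-similarity response."""
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    dists = np.stack([box_filter((img - np.roll(np.roll(img, dy, axis=0),
                                                dx, axis=1)) ** 2, radius)
                      for dy, dx in offsets])          # shape (4, H, W)
    variance = dists.mean(axis=0) + 1e-9               # local variance estimate
    desc = np.exp(-dists / variance)
    return desc / desc.max(axis=0)                     # per-pixel normalization

# Two "modalities": the same structure with inverted intensities.
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
inverted = 1.0 - img
intensity_ssd = np.mean((img - inverted) ** 2)   # large: raw intensities disagree
mind_ssd = np.mean((mind_descriptor(img)
                    - mind_descriptor(inverted)) ** 2)   # ~0: descriptors agree
```

    The SSD between MIND representations, rather than between raw intensities, is what the paper uses as a multi-modal similarity metric.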

  10. Observation, Identification, and Impact of Multi-Modal Plasma Responses to Applied Magnetic Perturbations

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas

    2015-11-01

    Experiments on DIII-D have demonstrated that multiple kink modes with comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n=2, in good agreement with ideal MHD models. In contrast to a single-mode model, the structure of the response measured using poloidally distributed magnetic sensors changes when varying the applied poloidal spectrum. This is most readily evident in that different spectra of applied fields can independently excite inboard and outboard magnetic responses, which are identified as distinct plasma modes by IPEC modeling. The outboard magnetic response is correlated with the plasma pressure and consistent with the long wavelength perturbations of the least stable, pressure driven kinks calculated by DCON and used in IPEC. The models show the structure of the pressure driven modes extends throughout the bad curvature region and into the plasma core. The inboard plasma response is correlated with the edge current profile and requires the inclusion of multiple kink modes with greater stability, including opposite helicity modes, to replicate the experimental observations in the models. IPEC reveals the resulting mode structure to be highly localized in the plasma edge. Scans of the applied spectrum show this response induces the transport that influences the density pump-out, as well as the toroidal rotation drag observed in experiment and modeled using PENT. The classification of these two mode types establishes a new multi-modal paradigm for n=2 plasma response and guides the understanding needed to optimize 3D fields for independent control of stability and transport. Supported by US DOE contract DE-AC02-09CH11466.

  11. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics, reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio event detection with intrusion detections from video processing. The audio analysis consists of modeling the normal ambience and detecting deviations from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent event detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts; a GMM is used to capture the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events are not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
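    The unsupervised ambience-modeling step lends itself to a compact sketch: fit a GMM to acoustic feature vectors from normal recordings, then flag test frames whose log-likelihood falls below a low training-set percentile. The synthetic features, component count, and thresholding rule here are illustrative assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 13))   # stand-in for MFCC frames

# Model the "normal" ambience; deviations from it count as unusual events.
ambience = GaussianMixture(n_components=4, random_state=0).fit(normal)
threshold = np.percentile(ambience.score_samples(normal), 1)  # 1st percentile

def is_unusual(frames):
    """True for frames whose likelihood deviates from the trained ambience."""
    return ambience.score_samples(frames) < threshold

loud_event = rng.normal(8.0, 1.0, size=(5, 13))  # frames far from the ambience
flags = is_unusual(loud_event)
```

    In the full system such frame-level detections would then be fused with the video analytics via the Dempster-Shafer combination described above.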

  12. Multi-modal molecular diffuse optical tomography system for small animal imaging

    NASA Astrophysics Data System (ADS)

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-10-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near-infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to two-dimensional (2D) planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localized to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images.
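    The model-based recovery step described above can be illustrated with a toy linear inverse problem: boundary light measurements are a linear map of internal source strengths, and a non-negative least-squares fit recovers the sources. The forward matrix below is a random placeholder, not a real light-propagation model (the actual system builds it from, e.g., the diffusion approximation):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((40, 10))        # sensitivity of 40 detector readings to 10 voxels
s_true = np.zeros(10)
s_true[3] = 2.0                 # a single internal luminescence source
b = A @ s_true                  # noiseless boundary measurements

# Non-negativity reflects the physics: source strengths cannot be negative.
s_est, residual = nnls(A, b)
```

    With noiseless data and an overdetermined, full-rank forward model, the fit recovers the single source exactly; real reconstructions additionally regularise and use the spectrally constrained optical properties as priors.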

  13. Random forest-based similarity measures for multi-modal classification of Alzheimer’s disease

    PubMed Central

    Gray, Katherine R.; Aljabar, Paul; Heckemann, Rolf A.; Hammers, Alexander; Rueckert, Daniel

    2012-01-01

    Neurodegenerative disorders, such as Alzheimer’s disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multimodality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities out-performs classification based on any individual modality for comparisons between Alzheimer’s disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer’s disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differ by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the
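    The forest-derived pairwise similarity can be sketched as a leaf co-occurrence ("proximity") matrix: two samples are similar when many trees route them to the same leaf. The data and forest parameters below are synthetic placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=60, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)        # (n_samples, n_trees) leaf index per tree
# Proximity = fraction of trees in which two samples share a leaf.
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
```

    Because `prox` depends only on leaf membership, one such matrix can be computed per modality regardless of how many raw features each modality has; `1 - prox` then serves as a distance for constructing the joint embedding.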

  14. Multi-modal vibration based MEMS energy harvesters for ultra-low power wireless functional nodes

    NASA Astrophysics Data System (ADS)

    Iannacci, J.; Gottardi, M.; Serra, E.; Di Criscienzo, R.; Borrielli, A.; Bonaldi, M.

    2013-05-01

    The aim of this contribution is to report and discuss a preliminary study and rough optimization of a novel concept of MEMS device for vibration energy harvesting, based on multi-modal dynamic behavior. The circular-shaped device features Four-Leaf Clover-like (FLC) double spring-mass cascaded systems, constrained to the surrounding frame by means of four straight beams. The combination of the flexural bending of the slender beams and of the deformable parts of the petals populates the desired vibration frequency range with a number of resonant modes, improving the energy conversion capability of the micro-transducer. The harvester, conceived for piezoelectric conversion of mechanical into electrical energy, is intended to sense environmental vibrations; its geometry is therefore optimized to yield a large concentration of resonant modes in a frequency range below 5-10 kHz. The results of FEM (Finite Element Method) analyses performed in ANSYS Workbench are reported, for both the modal and the harmonic response, providing important indications for the optimization of the device geometry. The analysis reported in this work is limited to the mechanical modeling of the proposed MEMS harvester concept. Future developments of the study will include piezoelectric conversion in the FEM simulations, in order to estimate the actual power levels achievable with the proposed harvester concept. Furthermore, the results of the FEM studies discussed here will be validated against experimental data as soon as the MEMS resonator specimens, currently under fabrication, are ready for testing.
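    For orientation, the sub-10 kHz target can be checked against the fundamental flexural mode of a bare clamped-clamped microbeam. With illustrative silicon dimensions (not taken from the paper), the beam alone resonates in the tens of kHz, suggesting why the attached petal masses and cascaded springs are needed to pull modes down into the target range:

```python
import math

# Euler-Bernoulli beam, both ends clamped: f_n = (lam_n^2 / 2*pi*L^2) * sqrt(EI / rho*A)
E, rho = 169e9, 2330.0           # silicon Young's modulus (Pa) and density (kg/m^3)
L, w, t = 1000e-6, 20e-6, 5e-6   # illustrative beam length, width, thickness (m)

I = w * t**3 / 12                # second moment of area of the rectangular section
A = w * t                        # cross-sectional area
lam1 = 4.730                     # first clamped-clamped eigenvalue

f1 = (lam1**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
# f1 comes out around 44 kHz, well above the 5-10 kHz target range.
```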

  15. Comparing uni-modal and multi-modal therapies for improving writing in acquired dysgraphia after stroke.

    PubMed

    Thiel, Lindsey; Sage, Karen; Conroy, Paul

    2016-06-01

    Writing therapy studies have been predominantly uni-modal in nature; i.e., their central therapy task has typically been either writing to dictation or copying and recalling words. There has not yet been a study that has compared the effects of a uni-modal to a multi-modal writing therapy in terms of improvements to spelling accuracy. A multiple-case study with eight participants aimed to compare the effects of a uni-modal and a multi-modal therapy on the spelling accuracy of treated and untreated target words at immediate and follow-up assessment points. A cross-over design was used and within each therapy a matched set of words was targeted. These words and a matched control set were assessed before as well as immediately after each therapy and six weeks following therapy. The two approaches did not differ in their effects on spelling accuracy of treated or untreated items or degree of maintenance. All participants made significant improvements on treated and control items; however, not all improvements were maintained at follow-up. The findings suggested that multi-modal therapy did not have an advantage over uni-modal therapy for the participants in this study. Performance differences were instead driven by participant variables. PMID:25854414

  16. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    NASA Astrophysics Data System (ADS)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically ``streamlined'' rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  17. Classification of first-episode psychosis: a multi-modal multi-feature approach integrating structural and diffusion imaging.

    PubMed

    Peruzzo, Denis; Castellani, Umberto; Perlini, Cinzia; Bellani, Marcella; Marinelli, Veronica; Rambaldelli, Gianluca; Lasalvia, Antonio; Tosato, Sarah; De Santi, Katia; Murino, Vittorio; Ruggeri, Mirella; Brambilla, Paolo

    2015-06-01

    Currently, most classification studies of psychosis have focused on chronic patients and employed a single machine learning approach. To overcome these limitations, we here compare, to the best of our knowledge for the first time, different classification methods for first-episode psychosis (FEP) using multi-modal imaging data computed over several cortical and subcortical structures and white matter fiber bundles. 23 FEP patients and 23 age-, gender-, and race-matched healthy participants were included in the study. An innovative multivariate approach based on multiple kernel learning (MKL) methods was implemented on structural MRI and diffusion tensor imaging. MKL provides the best classification performance in comparison with the more widely used support vector machine, enabling the definition of a reliable automatic decision system based on the integration of multi-modal imaging information. Our results show a discrimination accuracy greater than 90% between healthy subjects and patients with FEP. Regions with an accuracy greater than 70% across different imaging sources and measures were the middle and superior frontal gyrus, parahippocampal gyrus, uncinate fascicles, and cingulum. This study shows that multivariate machine learning approaches integrating multi-modal and multi-source imaging data can classify FEP patients with high accuracy. Interestingly, specific grey matter structures and white matter bundles reach high classification reliability when using different imaging modalities and indices, potentially outlining a prefronto-limbic network impaired in FEP, with particular regard to the right hemisphere. PMID:25344845
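    The kernel-combination idea at the core of MKL can be sketched as follows, with synthetic stand-ins for the two imaging modalities and fixed kernel weights. Real MKL learns the weights jointly with the classifier; everything named here is illustrative, not taken from the study:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_struct = rng.normal(size=(46, 20))   # stand-in for structural MRI features
X_dti = rng.normal(size=(46, 30))      # stand-in for diffusion features
y = np.repeat([0, 1], 23)              # 23 patients, 23 matched controls

# One kernel per modality, combined as a weighted sum (weights fixed here).
K = 0.6 * rbf_kernel(X_struct) + 0.4 * rbf_kernel(X_dti)

clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
```

    Because the modalities enter only through their kernels, features of very different dimensionality can be integrated without concatenating them into one vector.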

  18. Open Source GIS based integrated watershed management

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address current and future resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modelling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate much greater efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer-platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address

  19. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  20. Development of a Cross-Platform Ubiquitous Language Learning Service via Mobile Phone and Interactive Television

    ERIC Educational Resources Information Center

    Fallahkhair, Sanaz; Pemberton, L.; Griffiths, R.

    2007-01-01

    This paper describes the development processes for a cross-platform ubiquitous language learning service via interactive television (iTV) and mobile phone. Adapting a learner-centred design methodology, a number of requirements were gathered from multiple sources that were subsequently used in TAMALLE (television and mobile phone assisted language…

  1. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration

    PubMed Central

    Dryden, Michael D. M.; Wheeler, Aaron R.

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100

  2. A study of clinically related open source software projects.

    PubMed

    Hogarth, Michael A; Turner, Stuart

    2005-01-01

    Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized nor formally studied so the 'fit' of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056

  3. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  4. A novel multi-modal platform to image molecular and elemental alterations in ischemic stroke.

    PubMed

    Caine, Sally; Hackett, Mark J; Hou, Huishu; Kumar, Saroj; Maley, Jason; Ivanishvili, Zurab; Suen, Brandon; Szmigielski, Aleksander; Jiang, Zhongxiang; Sylvain, Nicole J; Nichol, Helen; Kelly, Michael E

    2016-07-01

    Stroke is a major global health problem, with the prevalence and economic burden predicted to increase due to aging populations in western society. Following stroke, numerous biochemical alterations occur and damage can spread to nearby tissue. This zone of "at risk" tissue is termed the peri-infarct zone (PIZ). As the PIZ contains tissue not initially damaged by the stroke, it is considered by many as salvageable tissue. For this reason, much research effort has been undertaken to improve the identification of the PIZ and to elucidate the biochemical mechanisms that drive tissue damage in the PIZ, in the hope of identifying new therapeutic targets. Despite this effort, few therapies have evolved, attributed, in part, to an incomplete understanding of the biochemical mechanisms driving tissue damage in the PIZ. Magnetic resonance imaging (MRI) has long been the gold standard to study alterations in gross brain structure, and is frequently used to study the PIZ following stroke. Unfortunately, MRI does not have sufficient spatial resolution to study individual cells within the brain, and reveals little information on the biochemical mechanisms driving tissue damage. MRI results may be complemented with histology or immunohistochemistry to provide information at the cellular or sub-cellular level, but are limited to studying biochemical markers that can be successfully "tagged" with a stain or antigen. However, many important biochemical markers cannot be studied with traditional MRI or histology/histochemical methods. Therefore, we have developed and applied a multi-modal imaging platform to reveal elemental and molecular alterations that could not previously be imaged by other traditional methods. Our imaging platform incorporates a suite of spectroscopic imaging techniques: Fourier transform infrared imaging, Raman spectroscopic imaging, coherent anti-Stokes Raman spectroscopic imaging and X-ray fluorescence imaging. This approach does not preclude the use of

  5. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  6. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while less than…

  7. The Open Source Teaching Project (OSTP): Research Note.

    ERIC Educational Resources Information Center

    Hirst, Tony

    The Open Source Teaching Project (OSTP) is an attempt to apply a variant of the successful open source software approach to the development of educational materials. Open source software is software licensed in such a way as to allow anyone the right to modify and use it. From such a simple premise, a whole industry has arisen, most notably in the…

  8. Behind Linus's Law: Investigating Peer Review Processes in Open Source

    ERIC Educational Resources Information Center

    Wang, Jing

    2013-01-01

    Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…

  9. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.

  10. Open Source Software to Control Bioflo Bioreactors

    PubMed Central

    Burdge, David A.; Libourel, Igor G. L.

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW. PMID:24667828
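    A CSV setpoint protocol of the kind described above might look roughly like this; the column names and file layout are our own guesses for illustration, not the actual format used by the Bioflo software:

```python
import csv
import io

# Hypothetical protocol: each row schedules one setpoint change.
protocol_csv = """time_min,parameter,setpoint
0,agitation_rpm,200
0,temperature_C,37.0
120,agitation_rpm,400
"""

def load_protocol(text):
    """Parse a CSV protocol into (time, parameter, setpoint) steps, sorted by time."""
    steps = [
        (float(row["time_min"]), row["parameter"], float(row["setpoint"]))
        for row in csv.DictReader(io.StringIO(text))
    ]
    return sorted(steps)

steps = load_protocol(protocol_csv)
```

    A protocol executor would then walk the sorted steps and apply each setpoint when the elapsed time is reached; conditional logic beyond simple schedules is what the Python-based execution model is for.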

  11. Open source software to control Bioflo bioreactors.

    PubMed

    Burdge, David A; Libourel, Igor G L

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW. PMID:24667828

  12. Spatial rainfall data in open source environment

    NASA Astrophysics Data System (ADS)

    Schuurmans, Hanneke; Maarten Verbree, Jan; Leijnse, Hidde; van Heeringen, Klaas-Jan; Uijlenhoet, Remko; Bierkens, Marc; van de Giesen, Nick; Gooijer, Jan; van den Houten, Gert

    2013-04-01

    Since January 2013, the Netherlands has had access to innovative, high-quality rainfall data for water managers. The product is innovative for the following reasons. (i) It was developed in a 'golden triangle' construction: a cooperation between government, business and research. (ii) The rainfall products are developed under the open-source GPL license. The initiative comes from a group of water boards in the Netherlands that joined forces to fund the development of a new rainfall product. Not only data from Dutch radar stations are used (as is currently done by the Dutch meteorological organization KNMI), but also data from radars in Germany and Belgium. After a radar composite is made, it is adjusted using data from rain gauges (ground truth). This results in 9 different rainfall products that give the best rainfall data for each moment. Specific knowledge is necessary to develop this kind of data, so a pool of experts (KNMI, Deltares and 3 universities) participated in the development. The philosophy of the developing partners is that products like this should be developed as open source. This way knowledge is shared and the whole community is able to make suggestions for improvement. In our opinion this is the only way to make real progress in product development. Furthermore, the financial resources of government organizations are used optimally. More info (in Dutch): www.nationaleregenradar.nl

  13. Developing an Open Source Option for NASA Software

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Parks, John W. (Technical Monitor)

    2003-01-01

    We present arguments in favor of developing an Open Source option for NASA software; in particular, we discuss how Open Source is compatible with NASA's mission. We compare and contrast several of the leading Open Source licenses, and propose one - the Mozilla license - for use by NASA. We also address some of the related issues for NASA with respect to Open Source. In particular, we discuss some of the elements in the External Release of NASA Software document (NPG 2210.1A) that will likely have to be changed in order to make Open Source a reality within the agency.

  14. Cross-platform learning: on the nature of children's learning from multiple media platforms.

    PubMed

    Fisch, Shalom M

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several recent studies to explore cross-platform learning (i.e., learning from combined use of multiple media platforms) and how such learning compares to learning from one medium. The paper discusses unique benefits of cross-platform learning, a theoretical mechanism to explain how these benefits might arise, and questions for future research in this emerging field. PMID:23483694

  15. Cross-Platform JavaScript Coding: Shifting Sand Dunes and Shimmering Mirages.

    ERIC Educational Resources Information Center

    Merchant, David

    1999-01-01

    Most libraries don't have the resources to cross-platform and cross-version test all of their JavaScript code. Many turn to WYSIWYG editors; however, WYSIWYG editors don't generally produce optimized code. Web developers should: test their code on at least one 3.0 browser, code by hand using tools to help speed up that process, and include a simple…

  16. An open source simulator for water management

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Selby, Philip; Mohammed, Khaled; Khadem, Majed; Padula, Silvia; Harou, Julien; Rosenberg, David; Rheinheimer, David

    2015-04-01

    Descriptive modelling of water resource systems requires the representation of different aspects in one model: the physical system, including hydrological inputs and engineered infrastructure, and human management, including social, economic and institutional behaviours and constraints. Although most water resource systems share some characteristics, such as the ability to represent them as a network of nodes and links, geographical, institutional and other differences mean that invariably each water system functions in a unique way. A diverse group is developing an open source simulation framework which will allow model developers to build generalised water management models customised to the institutional, physical and economic components they seek to model. The framework will allow the simulation of the complex individual and institutional behaviour required for the assessment of real-world resource systems. It supports the spatial and hierarchical structures commonly found in water resource systems. Individual infrastructure can be operated by different actors, while policies are defined at a regional level by one or more institutional actors. The framework enables building multi-agent system simulators in which developers can define their own agent types and add their own decision-making code. Developers using the framework have two main tasks: (i) extend the core classes to represent the aspects of their particular system, and (ii) write model structure files. Both are done in Python. For the first task, users must either write new decision-making code for each class or link to an existing code base to provide functionality to each of these extension classes. The model structure file links these extension classes in a standardised way to the network topology. The framework will be open source, written in Python, and available for download through standard installer packages.
Many water management model developers are unfamiliar
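
    The extension pattern described above (subclass a core class, attach decision-making code, wire it into a network) might look roughly like the following sketch. All class and method names here are illustrative assumptions, not the framework's actual API.

```python
# Hypothetical sketch of the framework's extension pattern: developers
# subclass a core node type and supply their own operating rules.

class Node:
    """Minimal stand-in for a core network node class."""
    def __init__(self, name):
        self.name = name
        self.storage = 0.0

    def step(self, inflow):
        raise NotImplementedError

class Reservoir(Node):
    """Extension class carrying operator decision logic."""
    def __init__(self, name, capacity, target_release):
        super().__init__(name)
        self.capacity = capacity
        self.target_release = target_release

    def step(self, inflow):
        # Operator rule: release the target amount if storage allows,
        # then spill anything above capacity.
        self.storage += inflow
        release = min(self.target_release, self.storage)
        self.storage -= release
        spill = max(0.0, self.storage - self.capacity)
        self.storage -= spill
        return release + spill

res = Reservoir("res1", capacity=100.0, target_release=10.0)
outflows = [res.step(inflow) for inflow in [50.0, 80.0, 5.0]]
print(outflows)  # -> [10.0, 20.0, 10.0]
```

    In a real model, a structure file would link many such extension classes to the network topology rather than instantiating them by hand.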

  17. Comparison of Sleep-Wake Classification using Electroencephalogram and Wrist-worn Multi-modal Sensor Data

    PubMed Central

    Sano, Akane; Picard, Rosalind W.

    2015-01-01

    This paper presents a comparison of sleep-wake classification using electroencephalogram (EEG) data and multi-modal data from a wrist-worn sensor. We collected physiological data while participants were in bed: EEG, skin conductance (SC), skin temperature (ST), and acceleration (ACC) data from 15 college students, computed features, and compared the intra-/inter-subject classification results. EEG features achieved 83% classification accuracy, while features from the wrist-worn sensor achieved 74%; the combination of ACC and ST features played the most important role in sleep/wake classification. PMID:25570112
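
    To make the feature-based approach concrete, here is an illustrative (not the paper's actual pipeline) per-epoch computation of ACC and ST features combined in a simple rule-based sleep/wake classifier. The feature definitions, thresholds, and sample values are assumptions for demonstration only.

```python
# Toy sleep/wake classifier from wrist-sensor features: low movement
# variability plus elevated distal skin temperature suggests sleep.
from statistics import mean, pstdev

def features(acc_epoch, st_epoch):
    return {
        "acc_std": pstdev(acc_epoch),  # movement variability within epoch
        "st_mean": mean(st_epoch),     # skin temperature level (deg C)
    }

def classify(feat, acc_thresh=0.05, st_thresh=33.0):
    if feat["acc_std"] < acc_thresh and feat["st_mean"] > st_thresh:
        return "sleep"
    return "wake"

quiet_warm = features([1.00, 1.01, 1.00, 0.99], [34.1, 34.0, 34.2, 34.1])
moving_cool = features([0.8, 1.3, 0.6, 1.5], [31.0, 31.2, 30.9, 31.1])
print(classify(quiet_warm), classify(moving_cool))  # -> sleep wake
```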

  18. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
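
    The bias and noise parameters mentioned above can be estimated from samples taken while the phone lies still; this sketch shows one common way to do it. The sample values and the z-axis-at-rest assumption are illustrative, not data from the application.

```python
# Estimate accelerometer bias (systematic offset from expected gravity)
# and noise (1-sigma scatter) from stationary z-axis samples.
from statistics import mean, pstdev

G = 9.80665  # expected z-axis magnitude at rest, m/s^2

def bias_and_noise(z_samples, expected=G):
    bias = mean(z_samples) - expected  # systematic offset
    noise = pstdev(z_samples)          # random noise (standard deviation)
    return bias, noise

samples = [9.83, 9.85, 9.81, 9.84, 9.82]
bias, noise = bias_and_noise(samples)
print(round(bias, 3), round(noise, 3))  # -> 0.023 0.014
```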

  19. Evaluation of Game Engines for Cross-Platform Development of Mobile Serious Games for Health.

    PubMed

    Kleinschmidt, Carina; Haag, Martin

    2016-01-01

    Studies have shown that serious games for health can improve patient compliance and help to increase the quality of medical education. Given the growing availability of mobile devices, the development of cross-platform mobile apps in particular is helpful for improving healthcare. As such development can be highly time-consuming and expensive, an alternative development process is needed. Game engines are expected to simplify this process. Therefore, this article examines the question of whether using game engines for cross-platform serious games for health can simplify development compared to developing a plain HTML5 app. First, a systematic review of the literature was conducted in different databases (MEDLINE, ACM and IEEE). Afterwards, three different game engines were chosen, evaluated in different categories and compared to the development of an HTML5 app. This was realized by implementing a prototypical application in the different engines and conducting a utility analysis. The evaluation shows that the Marmalade engine is the best choice for development in this scenario. Furthermore, the game engines have clear benefits over plain HTML5 development, as they provide components for graphics, physics, sounds, etc. The authors recommend using the Marmalade engine for a cross-platform mobile serious game for health. PMID:27139405

  20. XMS: Cross-Platform Normalization Method for Multimodal Mass Spectrometric Tissue Profiling

    NASA Astrophysics Data System (ADS)

    Golf, Ottmar; Muirhead, Laura J.; Speller, Abigail; Balog, Júlia; Abbassi-Ghadi, Nima; Kumar, Sacheen; Mróz, Anna; Veselkov, Kirill; Takáts, Zoltán

    2015-01-01

    Here we present a proof-of-concept cross-platform normalization approach to convert raw mass spectra acquired by distinct desorption ionization methods and/or instrumental setups to cross-platform normalized analyte profiles. The initial step of the workflow is database-driven peak annotation, followed by summarization of peak intensities of different ions from the same molecule. The resulting compound-intensity spectra are adjusted to a method-independent intensity scale by using predetermined, compound-specific normalization factors. The method is based on the assumption that distinct MS-based platforms capture a similar set of chemical species in a biological sample, though these species may exhibit platform-specific molecular ion intensity distribution patterns. The method was validated on two sample sets of (1) porcine tissue analyzed by laser desorption ionization (LDI), desorption electrospray ionization (DESI), and rapid evaporative ionization mass spectrometry (REIMS) in combination with Fourier-transform-based mass spectrometry; and (2) healthy/cancerous colorectal tissue analyzed by DESI and REIMS, with the latter combined with time-of-flight mass spectrometry. We demonstrate the capacity of our method to reduce MS-platform-specific variation, resulting in (1) high inter-platform concordance coefficients of analyte intensities; (2) clear principal component based clustering of analyte profiles according to histological tissue types, irrespective of the desorption ionization technique or mass spectrometer used; and (3) accurate "blind" classification of histologic tissue types using cross-platform normalized analyte profiles.
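
    The three-step workflow in the abstract (annotate peaks against a compound database, sum intensities of ions belonging to the same compound, rescale with compound-specific factors) can be sketched as below. The database entries, m/z tolerance, and normalization factors are invented for illustration.

```python
# Toy cross-platform normalization: peaks -> annotated compound profile
# -> method-independent intensity scale.

DB = {  # reference m/z -> compound (e.g. two adduct ions of one lipid)
    760.59: "PC(34:1)",
    782.57: "PC(34:1)",
    734.57: "PC(32:0)",
}
FACTORS = {"PC(34:1)": 0.8, "PC(32:0)": 1.5}  # platform-specific factors

def normalize(peaks, tol=0.02):
    profile = {}
    for mz, intensity in peaks:
        for ref_mz, compound in DB.items():
            if abs(mz - ref_mz) <= tol:  # step 1: annotate by m/z match
                # step 2: sum intensities of ions from the same compound
                profile[compound] = profile.get(compound, 0.0) + intensity
                break
    # step 3: rescale to the method-independent intensity scale
    return {c: i * FACTORS[c] for c, i in profile.items()}

peaks = [(760.58, 100.0), (782.58, 50.0), (734.56, 40.0), (900.00, 10.0)]
print(normalize(peaks))  # unannotated 900.00 peak is dropped
```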

  1. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications.

    PubMed

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check-up and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391

  2. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.

  3. An Affordable Open-Source Turbidimeter

    PubMed Central

    Kelley, Christopher D.; Krolick, Alexander; Brunner, Logan; Burklund, Alison; Kahn, Daniel; Ball, William P.; Weber-Shirk, Monroe

    2014-01-01

    Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring. PMID:24759114
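
    To illustrate how such an instrument maps sensor output to turbidity: a piecewise-linear calibration curve built from formazin standards converts the light-to-frequency sensor's reading (Hz) into NTU. The calibration points below are made up, not values from the published design.

```python
# Convert a light-to-frequency sensor reading to NTU via linear
# interpolation between calibration points.

CAL = [(200.0, 0.0), (400.0, 10.0), (900.0, 100.0)]  # (frequency Hz, NTU)

def to_ntu(freq):
    pts = sorted(CAL)
    if freq <= pts[0][0]:
        return pts[0][1]
    for (f0, n0), (f1, n1) in zip(pts, pts[1:]):
        if freq <= f1:
            # linear interpolation within this calibration segment
            return n0 + (n1 - n0) * (freq - f0) / (f1 - f0)
    return pts[-1][1]  # clamp readings beyond the last standard

print(to_ntu(300.0), to_ntu(650.0))  # -> 5.0 55.0
```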

  4. JSMAA: open source software for SMAA computations

    NASA Astrophysics Data System (ADS)

    Tervonen, Tommi

    2014-01-01

    Most software for multi-criteria decision analysis (MCDA) implements a small set of compatible methods as a closed monolithic program. With such software tools, the decision models have to be input by hand. In some applications, however, the model can be generated using external information sources, and thus it would be beneficial if the MCDA software could integrate into the comprehensive information infrastructure. This article motivates the need for model generation in the methodological context of stochastic multicriteria acceptability analysis (SMAA), and describes the JSMAA software that implements the SMAA-2, SMAA-O and SMAA-TRI methods. JSMAA is open source and divided into separate graphical user interface and library components, enabling its use in systems with a model generation subsystem.
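
    The core SMAA-2 idea implemented by such software can be sketched in a few lines: sample feasible criterion weights, rank alternatives by weighted utility, and estimate acceptability indices as the share of samples in which each alternative attains a given rank. The additive utility model, uniform weight sampling, and the alternatives below are simplifying assumptions for illustration.

```python
# Minimal Monte Carlo estimate of first-rank acceptability indices.
import random

def sample_weights(n_crit, rng):
    # Uniform sampling on the simplex: w_i >= 0, sum w_i = 1.
    cuts = sorted(rng.random() for _ in range(n_crit - 1))
    bounds = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def first_rank_acceptability(alternatives, n_samples=10000, seed=42):
    rng = random.Random(seed)
    names = list(alternatives)
    wins = {name: 0 for name in names}
    n_crit = len(next(iter(alternatives.values())))
    for _ in range(n_samples):
        w = sample_weights(n_crit, rng)
        utilities = {n: sum(wi * xi for wi, xi in zip(w, alternatives[n]))
                     for n in names}
        wins[max(utilities, key=utilities.get)] += 1
    return {n: wins[n] / n_samples for n in names}

# Criteria values already scaled to [0, 1]; higher is better.
alts = {"A": [0.9, 0.2], "B": [0.2, 0.9], "C": [0.6, 0.6]}
acc = first_rank_acceptability(alts)
print(acc)  # the balanced alternative C wins least often here
```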

  5. An affordable open-source turbidimeter.

    PubMed

    Kelley, Christopher D; Krolick, Alexander; Brunner, Logan; Burklund, Alison; Kahn, Daniel; Ball, William P; Weber-Shirk, Monroe

    2014-01-01

    Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring. PMID:24759114

  6. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In a large institution's PACS, patient data often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offers the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow access to several DICOM archives through a single point of access. An interactive web-based front-end allows users to search image data seamlessly across the different archives and display the results or route the image data to another DICOM-compliant destination. An XML-based API allows other software programs to easily benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in the DICOM protocol. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under open source free licensing and developed using open-source components (Apache Tomcat for the web server, MySQL for the database, OJB for object/relational data mapping, etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multiple institutions through appropriate security and encryption mechanisms.
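
    The single-access-point idea can be sketched schematically: one query fans out to several archive adapters and the results are merged, so callers never deal with each source directly. This is a hypothetical illustration, not the portal's real API; the study metadata is invented.

```python
# Schematic portal: fan a query out over multiple archives, merge results.

class Archive:
    def __init__(self, name, studies):
        self.name = name
        self._studies = studies  # list of dicts standing in for DICOM metadata

    def query(self, patient_id):
        return [s for s in self._studies if s["PatientID"] == patient_id]

class Portal:
    """Single point of access over multiple archives."""
    def __init__(self, archives):
        self.archives = archives

    def query(self, patient_id):
        results = []
        for archive in self.archives:
            for study in archive.query(patient_id):
                # tag each result with its source archive
                results.append({**study, "source": archive.name})
        return results

pacs1 = Archive("PACS-1", [{"PatientID": "P1", "Modality": "MR"}])
pacs2 = Archive("PACS-2", [{"PatientID": "P1", "Modality": "CT"},
                           {"PatientID": "P2", "Modality": "US"}])
portal = Portal([pacs1, pacs2])
print(portal.query("P1"))  # P1's studies from both archives
```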

  7. Open Source Hardware for DIY Environmental Sensing

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data have been collected and transmitted by the logger, they are received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and WaterOneFlow (WOF) services. Since one of the main benefits of using open source hardware is easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  8. An open source business model for malaria.

    PubMed

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment in developing new drugs and vaccines is required in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how malaria R&D can best benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a small survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would use such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profit is available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  9. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of these interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time to develop new software is reduced. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available at SourceForge, that can be run via the command line, deployed in a web servlet container or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against Schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses, conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses.
This presentation will provide an overview of TEAM Engine, introduction of how to test via the OGC Testing web site and
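
    To illustrate the kind of assertions such compliance tests encode (this is not CTL or TEAM Engine code): parse a service's XML response and check required elements and attribute values. The tiny capabilities document below is fabricated for the example.

```python
# Assertion-style checks over a (fabricated) WFS capabilities response.
import xml.etree.ElementTree as ET

RESPONSE = """<WFS_Capabilities version="1.0.0">
  <Service><Name>MyWFS</Name></Service>
  <FeatureTypeList>
    <FeatureType><Name>roads</Name></FeatureType>
  </FeatureTypeList>
</WFS_Capabilities>"""

def check_capabilities(xml_text):
    root = ET.fromstring(xml_text)
    failures = []
    if root.get("version") != "1.0.0":
        failures.append("wrong version attribute")
    if root.find("Service/Name") is None:
        failures.append("missing Service/Name")
    if not root.findall("FeatureTypeList/FeatureType"):
        failures.append("no feature types advertised")
    return failures

print(check_capabilities(RESPONSE))  # -> [] (all checks pass)
```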

  10. Open-Source as a strategy for operational software - the case of Enki

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, 7 large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has

  11. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the image contrast between white and gray matter undergoes dramatic changes. In particular, the contrast inverts around 6–8 months of age, when white and gray matter are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion on the multi-modality T1, T2 and FA images. The segmentation result is then iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age using leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved high accuracy in terms of Dice ratios measuring the volume overlap between automated and manual segmentations: 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
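
    The patch-based label fusion idea can be illustrated with a drastically simplified stand-in: plain similarity-weighted voting over a patch library, not the paper's sparse-representation solver, and single-modality toy patches instead of T1/T2/FA triples. All values below are invented.

```python
# Simplified patch label fusion: each library patch votes for its label
# with a weight that decays with its distance to the target patch.
import math

def fuse_label(target, library):
    """library: list of (patch, label); patches are flat intensity lists."""
    votes = {}
    for patch, label in library:
        dist2 = sum((a - b) ** 2 for a, b in zip(target, patch))
        weight = math.exp(-dist2)  # closer patches vote more strongly
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

library = [
    ([0.9, 0.8, 0.9, 0.8], "WM"),  # bright patch labelled white matter
    ([0.2, 0.3, 0.2, 0.3], "GM"),  # dark patch labelled gray matter
]
print(fuse_label([0.85, 0.8, 0.9, 0.75], library))  # -> WM
```

    The full method instead reconstructs each target patch as a sparse combination of library patches across all three modalities, then refines the result with anatomical constraints.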

  12. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  13. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

    NASA Astrophysics Data System (ADS)

    Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

    2015-03-01

    Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines whose services facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases, each including a textual description and several images. In the context of this campaign many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in MeSH (Medical Subject Headings) terms extracted from the query and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with 77.15% accuracy.
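
    A toy version of a query-adaptive fusion criterion might look like the following: use visual retrieval in addition to text only when the query's MeSH terms are linked to known visual concepts. The decision rule and the term list are invented for illustration, not the paper's actual criterion.

```python
# Decide per query whether to run text-only or multimodal retrieval,
# based on whether any query MeSH term maps to a visual concept.

VISUAL_MESH = {"Radiography", "Magnetic Resonance Imaging", "Tomography"}

def choose_strategy(query_mesh_terms):
    visual_hits = set(query_mesh_terms) & VISUAL_MESH
    return "multimodal" if visual_hits else "text-only"

print(choose_strategy(["Magnetic Resonance Imaging", "Brain Neoplasms"]))
print(choose_strategy(["Diabetes Mellitus"]))
# -> multimodal
# -> text-only
```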

  14. Differences in Multi-Modal Ultrasound Imaging between Triple Negative and Non-Triple Negative Breast Cancer.

    PubMed

    Li, Ziyao; Tian, Jiawei; Wang, Xiaowei; Wang, Ying; Wang, Zhenzhen; Zhang, Lei; Jing, Hui; Wu, Tong

    2016-04-01

    The objective of this study was to identify multi-modal ultrasound imaging parameters that could potentially help to differentiate between triple negative breast cancer (TNBC) and non-TNBC. Conventional ultrasonography, ultrasound strain elastography and 3-D ultrasound (3-D-US) findings from 50 TNBC and 179 non-TNBC patients were retrospectively reviewed. Immunohistochemical examination was used as the reference gold standard for cancer subtyping. The different ultrasound modalities were initially analyzed to define TNBC-related features. Subsequently, logistic regression analysis was applied to the TNBC-related features to establish models for predicting TNBC. TNBCs often presented as micro-lobulated, markedly hypo-echoic masses with an abrupt interface (p = 0.015, 0.0015 and 0.004, compared with non-TNBCs, respectively) on conventional ultrasound, and showed a diminished retraction pattern phenomenon in the coronal plane (p = 0.035) on 3-D-US. Our findings suggest that B-mode ultrasound and 3-D-US within multi-modality ultrasonography could be useful non-invasive techniques for differentiating TNBCs from non-TNBCs. PMID:26786891

  15. A Systems Biology Consideration of the Vasculopathy of Sickle Cell Anemia: The Need for Multi-Modality Chemo-Prophylaxis

    PubMed Central

    Hebbel, Robert P.; Vercellotti, Greg M.; Nath, Karl A.

    2010-01-01

    Much of the morbidity and mortality of sickle cell anemia is accounted for by a chronic vasculopathy syndrome. There is currently no identified therapy, interventional or prophylactic, for this problem. For two reasons, development of an effective therapeutic approach will require a systems biology level perspective on the vascular pathobiology of sickle disease. In the first place, multiple biological processes contribute to the pathogenesis of vasculopathy: red cell sickling, inflammation and adhesion biology, coagulation activation, stasis, deficient bioavailability and excessive consumption of NO, excessive oxidation, and reperfusion injury physiology. The probable hierarchy of involvement of these disparate sub-biologies places inflammation caused by reperfusion injury physiology as the likely, proximate, linking pathophysiological factor. In the second place, most of these sub-biologies overlap with each other and, in any case, have multiple points of potential interaction and transactivation. Consequently, an approach modeled upon chemotherapy for cancer is needed. This would be a truly multi-modality approach that hopefully could be achieved via employment of relatively few drugs. It is proposed here that the specific combination of a statin with suberoylanilide hydroxamic acid would provide a suitable, broad, multi-modality approach to chemo-prophylaxis for sickle vasculopathy. PMID:19751187

  16. The Open Source Snowpack modelling ecosystem

    NASA Astrophysics Data System (ADS)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

Among the large number of numerical snow models available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to offer both seamless integration and high modularity, so that each tool can easily be used outside the ecosystem. Many of these Open Source tools undergo their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure, based on the assumption that in steep terrain, the most relevant information is

  17. Open Source Testing Capability for Geospatial Software

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2013-12-01

resource for technologists responsible for interoperability among scientific tools that are used for sharing data and linking models, both within and between Earth science disciplines. This presentation will focus on the OGC compliance infrastructure and its open source tools, open source tests, and an open issue tracker that can be used to improve scientific software. [1] http://www.opengeospatial.org/resource/products/stats [2] http://cite.opengeospatial.org/teamengine/ [3] http://cite.opengeospatial.org/te2

  18. Open Source Software Development Models—A State of Art

    NASA Astrophysics Data System (ADS)

    Kaur, Parminder; Singh, Hardeep

    2011-12-01

The objective of Open Source as well as Free Software is to encourage involvement in the form of improvement, modification and distribution of the licensed work. Open source software has proved itself highly suited, both as a software product and as a development methodology. The open source software development model supports all aspects of the process, including defining requirements, system-level design, detailed design, implementation, integration, field testing, and support, in order to produce high-quality products implementing client requirements. This paper analyses open source development models on the basis of common attributes such as parallel development, peer review, prompt feedback to users, parallel debugging, user involvement, and developer contributions.

  19. The case for open-source software in drug discovery.

    PubMed

    DeLano, Warren L

    2005-02-01

    Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options. PMID:15708536

  20. Closed-Loop, Open-Source Electrophysiology

    PubMed Central

    Rolston, John D.; Gross, Robert E.; Potter, Steve M.

    2010-01-01

    Multiple extracellular microelectrodes (multi-electrode arrays, or MEAs) effectively record rapidly varying neural signals, and can also be used for electrical stimulation. Multi-electrode recording can serve as artificial output (efferents) from a neural system, while complex spatially and temporally targeted stimulation can serve as artificial input (afferents) to the neuronal network. Multi-unit or local field potential (LFP) recordings can not only be used to control real world artifacts, such as prostheses, computers or robots, but can also trigger or alter subsequent stimulation. Real-time feedback stimulation may serve to modulate or normalize aberrant neural activity, to induce plasticity, or to serve as artificial sensory input. Despite promising closed-loop applications, commercial electrophysiology systems do not yet take advantage of the bidirectional capabilities of multi-electrodes, especially for use in freely moving animals. We addressed this lack of tools for closing the loop with NeuroRighter, an open-source system including recording hardware, stimulation hardware, and control software with a graphical user interface. The integrated system is capable of multi-electrode recording and simultaneous patterned microstimulation (triggered by recordings) with minimal stimulation artifact. The potential applications of closed-loop systems as research tools and clinical treatments are broad; we provide one example where epileptic activity recorded by a multi-electrode probe is used to trigger targeted stimulation, via that probe, to freely moving rodents. PMID:20859448

  1. Closed-loop, open-source electrophysiology.

    PubMed

    Rolston, John D; Gross, Robert E; Potter, Steve M

    2010-01-01

    Multiple extracellular microelectrodes (multi-electrode arrays, or MEAs) effectively record rapidly varying neural signals, and can also be used for electrical stimulation. Multi-electrode recording can serve as artificial output (efferents) from a neural system, while complex spatially and temporally targeted stimulation can serve as artificial input (afferents) to the neuronal network. Multi-unit or local field potential (LFP) recordings can not only be used to control real world artifacts, such as prostheses, computers or robots, but can also trigger or alter subsequent stimulation. Real-time feedback stimulation may serve to modulate or normalize aberrant neural activity, to induce plasticity, or to serve as artificial sensory input. Despite promising closed-loop applications, commercial electrophysiology systems do not yet take advantage of the bidirectional capabilities of multi-electrodes, especially for use in freely moving animals. We addressed this lack of tools for closing the loop with NeuroRighter, an open-source system including recording hardware, stimulation hardware, and control software with a graphical user interface. The integrated system is capable of multi-electrode recording and simultaneous patterned microstimulation (triggered by recordings) with minimal stimulation artifact. The potential applications of closed-loop systems as research tools and clinical treatments are broad; we provide one example where epileptic activity recorded by a multi-electrode probe is used to trigger targeted stimulation, via that probe, to freely moving rodents. PMID:20859448

  2. An open-source laser electronics suite

    NASA Astrophysics Data System (ADS)

    Pisenti, Neal C.; Reschovsky, Benjamin J.; Barker, Daniel S.; Restelli, Alessandro; Campbell, Gretchen K.

    2016-05-01

    We present an integrated set of open-source electronics for controlling external-cavity diode lasers and other instruments in the laboratory. The complete package includes a low-noise circuit for driving high-voltage piezoelectric actuators, an ultra-stable current controller based on the design of, and a high-performance, multi-channel temperature controller capable of driving thermo-electric coolers or resistive heaters. Each circuit (with the exception of the temperature controller) is designed to fit in a Eurocard rack equipped with a low-noise linear power supply capable of driving up to 5 A at +/- 15 V. A custom backplane allows signals to be shared between modules, and a digital communication bus makes the entire rack addressable by external control software over TCP/IP. The modular architecture makes it easy for additional circuits to be designed and integrated with existing electronics, providing a low-cost, customizable alternative to commercial systems without sacrificing performance.
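The digital communication bus that makes the rack addressable over TCP/IP can be illustrated with a minimal client/server sketch. The query syntax ("TEMP? 1") and the dummy server below are hypothetical illustrations, not the project's actual protocol:

```python
import socket
import threading

def dummy_rack_server(host="127.0.0.1", port=0):
    """Stand-in for the rack's digital communication bus: answers a
    hypothetical 'TEMP? <channel>' query with a fixed reading."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(64).decode().strip()
            if cmd.startswith("TEMP?"):
                conn.sendall(b"25.00\n")   # canned temperature reading
            else:
                conn.sendall(b"ERR\n")
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()               # (host, port) to connect to

def query(addr, command):
    """Send one line-oriented command to the rack and return its reply."""
    with socket.create_connection(addr) as s:
        s.sendall(command.encode() + b"\n")
        return s.makefile().readline().strip()

addr = dummy_rack_server()
reading = query(addr, "TEMP? 1")
```

A line-oriented TCP protocol like this keeps each module's controller simple while letting any external control software, on any platform, address the whole rack.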

  3. Open-source solutions for SPIMage processing.

    PubMed

    Schmied, Christopher; Stamataki, Evangelia; Tomancak, Pavel

    2014-01-01

Light sheet microscopy is an emerging technique allowing comprehensive visualization of dynamic biological processes at high spatial and temporal resolution, without significant damage to the sample by the imaging process itself. It thus lends itself to time-lapse observation of fluorescently labeled molecular markers over long periods of time in a living specimen. In combination with sample rotation, light sheet microscopy, and in particular its selective plane illumination microscopy (SPIM) flavor, enables imaging of relatively large specimens, such as embryos of animal model organisms, in their entirety. The benefits of SPIM multiview imaging come at the cost of the image data postprocessing necessary to deliver the final output that can be analyzed. Here, we provide a set of practical recipes that walk biologists through the complex processes of SPIM data registration, fusion, deconvolution, and time-lapse registration using publicly available open-source tools. We explain, in plain language, the basic principles behind SPIM image-processing algorithms, which should enable users to make informed decisions during parameter tuning of the various processing steps applied to their own datasets. Importantly, the protocols presented here apply equally to processing of multiview SPIM data from the commercial Zeiss Lightsheet Z.1 microscope and from open-access SPIM platforms such as OpenSPIM. PMID:24974045

  4. XNAT Central: Open sourcing imaging research data.

    PubMed

    Herrick, Rick; Horton, William; Olsen, Timothy; McKay, Michael; Archie, Kevin A; Marcus, Daniel S

    2016-01-01

    XNAT Central is a publicly accessible medical imaging data repository based on the XNAT open-source imaging informatics platform. It hosts a wide variety of research imaging data sets. The primary motivation for creating XNAT Central was to provide a central repository to host and provide access to a wide variety of neuroimaging data. In this capacity, XNAT Central hosts a number of data sets from research labs and investigative efforts from around the world, including the OASIS Brains imaging studies, the NUSDAST study of schizophrenia, and more. Over time, XNAT Central has expanded to include imaging data from many different fields of research, including oncology, orthopedics, cardiology, and animal studies, but continues to emphasize neuroimaging data. Through the use of XNAT's DICOM metadata extraction capabilities, XNAT Central provides a searchable repository of imaging data that can be referenced by groups, labs, or individuals working in many different areas of research. The future development of XNAT Central will be geared towards greater ease of use as a reference library of heterogeneous neuroimaging data and associated synthetic data. It will also become a tool for making data available supporting published research and academic articles. PMID:26143202

  5. Zherlock: an open source data analysis software.

    PubMed

    Alsberg, B K; Kirkhus, L; Hagen, R; Knudsen, O; Tangstad, T; Anderssen, E

    2003-01-01

Zherlock is an open-source software package that provides state-of-the-art data analysis tools to the user in an intuitive and flexible way. It is a front-end to different numerical "engines", producing a seamless integration of algorithms written in different computer languages. Of particular interest is creating an interface to high-level scientific languages such as Octave (a Matlab clone) and R (an S-PLUS clone) to enable efficient porting of new data analytical methods. Zherlock uses advanced scientific visualization tools in 2-D and 3-D and has been extended to work on virtual reality (VR) systems. Central to Zherlock is a visual programming environment (VPE) which enables diagram-based programming. These diagrams consist of nodes and connection lines, where each node is an operator or a method and the lines describe the flow of data between nodes. A VPE was chosen for Zherlock because it forms an effective way to control the processing pipeline in complex data analyses. The VPE is similar in functionality to other programs such as IRIS Explorer, AVS or LabVIEW. PMID:14758979

  6. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    SciTech Connect

Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK

  7. Real Space Multigrid (RMG) Open Source Software Suite for Multi-Petaflops Electronic Structure Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry; Li, Yan

RMG is a cross-platform open source package for ab initio electronic structure calculations that uses real-space grids, multigrid pre-conditioning, and subspace diagonalization to solve the Kohn-Sham equations. The code has been successfully used for a wide range of problems ranging from complex bulk materials to multifunctional electronic devices and biological systems. RMG makes efficient use of GPU accelerators, if present, but does not require them. Recent work has extended GPU support to systems with multiple GPUs per computational node, and optimized both CPU and GPU memory usage to enable large problem sizes, which are no longer limited by the memory of the GPU board. Additional enhancements include increased portability, scalability and performance. New versions of the code are regularly released at sourceforge.net/projects/rmgdft/. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms.

  8. Interactive multicentre teleconferences using open source software in a team of thoracic surgeons.

    PubMed

    Ito, Kazuhiro; Shimada, Junichi; Katoh, Daishiro; Nishimura, Motohiro; Yanada, Masashi; Okada, Satoru; Ishihara, Shunta; Ichise, Kaori

    2012-12-01

    Real-time consultation between a team of thoracic surgeons is important for the management of difficult cases. We established a system for interactive teleconsultation between multiple sites, based on open-source software. The graphical desktop-sharing system VNC (virtual network computing) was used for remotely controlling another computer. An image-processing package (OsiriX) was installed on the server to share the medical images. We set up a voice communication system using Voice Chatter, a free, cross-platform voice communication application. Four hospitals participated in the trials. One was connected by gigabit ethernet, one by WiMAX and one by ADSL. Surgeons at three of the sites found that it was comfortable to view images and consult with each other using the teleconferencing system. However, it was not comfortable using the client that connected via WiMAX, because of dropped frames. Apart from the WiMAX connection, the VNC-based screen-sharing system transferred the clinical images efficiently and in real time. We found the screen-sharing software VNC to be a good application for medical image interpretation, especially for a team of thoracic surgeons using multislice CT scans. PMID:23209271

  9. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue supporting the code itself. Inadequate documentation of astronomical software for adaptive optics simulators can hold back development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software code. Although most modern programming environments such as MATLAB or Octave have built-in documentation facilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the MATLAB comments of M-files into C-like syntax, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for deploying the framework. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
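The comment-translation step can be approximated as below. This Python sketch merely stands in for the authors' Perl script, and the exact comment and stub conventions are assumptions for illustration:

```python
import re

def mfile_to_doxygen(matlab_source):
    """Convert MATLAB '%' comment lines into C-style '///' lines so that
    Doxygen's C-like parser can pick them up, and rewrite function
    definitions into crude C-like prototype stubs."""
    out = []
    for line in matlab_source.splitlines():
        stripped = line.strip()
        if stripped.startswith("%"):
            # '% text' -> '/// text'
            out.append(re.sub(r"^\s*%+\s?", "/// ", line))
        elif stripped.startswith("function"):
            # 'function y = f(x)' -> 'function f(x);'
            m = re.match(
                r"function\s+(?:\[?[\w,\s]*\]?\s*=\s*)?(\w+)\s*(\([^)]*\))?",
                stripped)
            if m:
                out.append("function %s%s;" % (m.group(1), m.group(2) or "()"))
        else:
            out.append(line)       # pass executable lines through unchanged
    return "\n".join(out)

src = """% Computes the wavefront residual.
function r = residual(phase)
r = phase - mean(phase);
"""
converted = mfile_to_doxygen(src)
```

Feeding the converted text to Doxygen (configured to treat it as C-like input) then yields browsable documentation with the formulas and references kept in the comments.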

  10. Design and Development of an Open Source Software Application for the Characterization of Spatially Variable Fields

    NASA Astrophysics Data System (ADS)

    Gunnell, D. K.; Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    The characterization of the structural parameters of spatially variable fields (SVFs) is essential to understanding the variability of hydrological processes such as infiltration, evapotranspiration, groundwater contaminant transport, etc. SVFs can be characterized using a Bayesian inverse method called Method of Anchored Distributions (MAD). This method characterizes the structural parameters of SVFs using prior information of structural parameter fields, indirect measurements, and simulation models allowing the transfer of valuable information to a target variable field. An example SVF in hydrology is hydraulic conductivity, which may be characterized by head pressure measurements through a simulation model such as MODFLOW. This poster will present the design and development of a free and open source inverse modeling desktop software application and extension framework called MAD# for the characterization of the structural parameters of SVFs using MAD. The developed software is designed with a flexible architecture to support different simulation models and random field generators and includes geographic information system (GIS) interfaces for representing, analyzing, and understanding SVFs. This framework has also been made compatible with Mono, a cross-platform implementation of C#, for a wider usability.

  11. An Open Source Business Model for Malaria

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate the disease. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how malaria R&D can best benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents, and clinical trials, and conducted a small survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, ‘closed’ publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more “open source” approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would use such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profit is available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.’ President’s Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new

  12. Modeling most likely pathways for smuggling radioactive and special nuclear materials on a worldwide multi-modal transportation network

    SciTech Connect

    Saeger, Kevin J; Cuellar, Leticia

    2010-10-28

Nuclear weapons proliferation is an existing and growing worldwide problem. To help with devising strategies and supporting decisions to interdict the transport of nuclear material, we developed the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT), which provides an analytical approach for evaluating the probability that an adversary smuggling radioactive or special nuclear material will be detected during transit. We incorporate a global, multi-modal transportation network; explicit representation of designed and serendipitous detection opportunities; and multiple threat devices, material types, and shielding levels. This paper presents the general structure of PATRIOT and focuses on the theoretical framework used to model the reliabilities of all network components, which are used to predict the most likely pathways to the target.
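The "most likely pathway" computation can be sketched as a shortest-path problem: if each link carries a probability of evading detection, maximizing the product of those probabilities along a route is equivalent to minimizing the sum of their negative logarithms, so ordinary Dijkstra search applies. The network and probabilities below are hypothetical toy values, not PATRIOT's data:

```python
import heapq
import math

def most_likely_path(edges, source, target):
    """Find the route maximizing total evasion probability.

    edges: {(u, v): p} where p is the probability a shipment moving
    along link u->v evades detection. Each link is weighted -log(p),
    so Dijkstra's shortest path is the most likely evasion pathway.
    """
    graph = {}
    for (u, v), p in edges.items():
        graph.setdefault(u, []).append((v, -math.log(p)))
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:                 # reconstruct the route
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), math.exp(-dist[target])

# Hypothetical multi-modal links (sea vs. air) with evasion probabilities.
edges = {("A", "port"): 0.9, ("port", "T"): 0.5,
         ("A", "airport"): 0.6, ("airport", "T"): 0.95}
path, p_evade = most_likely_path(edges, "A", "T")
```

Here the air route wins (0.6 × 0.95 = 0.57 versus 0.9 × 0.5 = 0.45), illustrating how a single strong detection opportunity on one leg can shift the adversary's most likely pathway.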

  13. Development of EndoTOFPET-US, a multi-modal endoscope for ultrasound and time of flight positron emission tomography

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.

    2014-02-01

The EndoTOFPET-US project aims at developing a multi-modal device that combines ultrasound with time-of-flight positron emission tomography in a single endoscopic imaging instrument. The goal is to obtain a coincidence time resolution of about 200 ps FWHM and sub-millimetric spatial resolution for the PET head, integrating the components into a very compact detector suitable for endoscopic use. The scanner will be exploited for the clinical testing of new bio-markers, especially those targeted at prostate and pancreatic cancer, as well as for diagnostic and surgical oncology. This paper focuses on the status of the time-of-flight positron emission tomograph under development for the EndoTOFPET-US project.

  14. Open Source Library Management Systems: A Multidimensional Evaluation

    ERIC Educational Resources Information Center

    Balnaves, Edmund

    2008-01-01

    Open source library management systems have improved steadily in the last five years. They now present a credible option for small to medium libraries and library networks. An approach to their evaluation is proposed that takes account of three additional dimensions that only open source can offer: the developer and support community, the source…

  15. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    ERIC Educational Resources Information Center

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  16. Integrating an Automatic Judge into an Open Source LMS

    ERIC Educational Resources Information Center

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  17. Open-Source Unionism: New Workers, New Strategies

    ERIC Educational Resources Information Center

    Schmid, Julie M.

    2004-01-01

    In "Open-Source Unionism: Beyond Exclusive Collective Bargaining," published in fall 2002 in the journal Working USA, labor scholars Richard B. Freeman and Joel Rogers use the term "open-source unionism" to describe a form of unionization that uses Web technology to organize in hard-to-unionize workplaces. Rather than depend on the traditional…

  18. Migrations of the Mind: The Emergence of Open Source Education

    ERIC Educational Resources Information Center

    Glassman, Michael; Bartholomew, Mitchell; Jones, Travis

    2011-01-01

    The authors describe an Open Source approach to education. They define Open Source Education (OSE) as a teaching and learning framework where the use and presentation of information is non-hierarchical, malleable, and subject to the needs and contributions of students as they become "co-owners" of the course. The course transforms itself into an…

  19. Open Source Course Management Systems: A Case Study

    ERIC Educational Resources Information Center

    Remy, Eric

    2005-01-01

    In Fall 2003, Randolph-Macon Woman's College rolled out Claroline, an Open Source course management system for all the classes on campus. This document will cover some background on both Open Source in general and course management systems in specific, discuss technical challenges in the introduction and integration of the system and give some…

  20. Open Source as Appropriate Technology for Global Education

    ERIC Educational Resources Information Center

    Carmichael, Patrick; Honour, Leslie

    2002-01-01

Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to put forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing dependency…

  1. Getting Open Source Software into Schools: Strategies and Challenges

    ERIC Educational Resources Information Center

    Hepburn, Gary; Buley, Jan

    2006-01-01

    In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…

  2. Open Source Initiative Powers Real-Time Data Streams

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.

  3. Open Source for Knowledge and Learning Management: Strategies beyond Tools

    ERIC Educational Resources Information Center

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2007-01-01

In recent years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies. This book…

  4. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
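The baseline weighted-voting scheme the abstract builds on can be sketched per voxel as follows. The intensity-to-weight mapping (inverse absolute difference) is one common choice rather than the authors' exact formula, and the intensities and labels are toy values:

```python
def weighted_vote(target_intensity, atlases, eps=1e-6):
    """Fuse candidate labels at one voxel by intensity-weighted voting.

    atlases: list of (warped_intensity, label) pairs for this voxel.
    Each atlas votes for its label with a weight that grows as its
    warped intensity approaches the target image's intensity.
    """
    votes = {}
    for intensity, label in atlases:
        weight = 1.0 / (eps + abs(intensity - target_intensity))
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)    # consensus label

# Toy voxel: target intensity 100; three warped atlases disagree.
atlases = [(98, "hippocampus"), (140, "ventricle"), (103, "hippocampus")]
label = weighted_vote(100, atlases)
```

Because each weight here depends only on one atlas, two atlases that make the same registration error vote independently and can jointly outvote a correct atlas; joint label fusion addresses exactly that correlation between atlas errors.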

  5. Multi-atlas segmentation with joint label fusion and corrective learning-an open source implementation.

    PubMed

    Wang, Hongzhi; Yushkevich, Paul A

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and were among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427

  6. VEGF-loaded graphene oxide as theranostics for multi-modality imaging-monitored targeting therapeutic angiogenesis of ischemic muscle

    NASA Astrophysics Data System (ADS)

    Sun, Zhongchan; Huang, Peng; Tong, Guang; Lin, Jing; Jin, Albert; Rong, Pengfei; Zhu, Lei; Nie, Liming; Niu, Gang; Cao, Feng; Chen, Xiaoyuan

    2013-07-01

    Herein we report the design and synthesis of multifunctional VEGF-loaded IR800-conjugated graphene oxide (GO-IR800-VEGF) for multi-modality imaging-monitored therapeutic angiogenesis of ischemic muscle. The as-prepared GO-IR800-VEGF positively targets VEGF receptors, maintains an elevated level of VEGF in ischemic tissues for a prolonged time, and finally leads to remarkable therapeutic angiogenesis of ischemic muscle. Although more efforts are required to further understand the in vivo behaviors and the long-term toxicology of GO, our work demonstrates the success of using GO for efficient VEGF delivery in vivo by intravenous administration and suggests the great promise of using graphene oxide in theranostic applications for treating ischemic disease. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr01573d

  7. Distributed flow estimation and closed-loop control of an underwater vehicle with a multi-modal artificial lateral line.

    PubMed

    DeVries, Levi; Lagor, Francis D; Lei, Hong; Tan, Xiaobo; Paley, Derek A

    2015-04-01

    Bio-inspired sensing modalities enhance the ability of autonomous vehicles to characterize and respond to their environment. This paper concerns the lateral line of cartilaginous and bony fish, which is sensitive to fluid motion and allows fish to sense oncoming flow and the presence of walls or obstacles. The lateral line consists of two types of sensing modalities: canal neuromasts measure approximate pressure gradients, whereas superficial neuromasts measure local flow velocities. By employing an artificial lateral line, the performance of underwater sensing and navigation strategies is improved in dark, cluttered, or murky environments where traditional sensing modalities may be hindered. This paper presents estimation and control strategies enabling an airfoil-shaped unmanned underwater vehicle to assimilate measurements from a bio-inspired, multi-modal artificial lateral line and estimate flow properties for feedback control. We utilize potential flow theory to model the fluid flow past a foil in a uniform flow and in the presence of an upstream obstacle. We derive theoretically justified nonlinear estimation strategies to estimate the free stream flow speed, angle of attack, and the relative position of an upstream obstacle. The feedback control strategy uses the estimated flow properties to execute bio-inspired behaviors including rheotaxis (the tendency of fish to orient upstream) and station-holding (the tendency of fish to position behind an upstream obstacle). A robotic prototype outfitted with a multi-modal artificial lateral line composed of ionic polymer metal composite and embedded pressure sensors experimentally demonstrates the distributed flow sensing and closed-loop control strategies. PMID:25807584
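
    The flow-angle estimation idea above can be illustrated with the simplest potential-flow body, a circular cylinder rather than the paper's foil. The pressure-coefficient model and the brute-force grid-search estimator below are illustrative stand-ins for the paper's theoretically justified nonlinear estimators, and all names are assumptions.

```python
import math

def cp_cylinder(theta, alpha):
    """Potential-flow pressure coefficient on a circular cylinder when the
    free stream arrives from angle alpha (both angles in radians)."""
    return 1.0 - 4.0 * math.sin(theta - alpha) ** 2

def estimate_flow_angle(sensor_angles, cp_measured):
    """Grid-search the flow angle that best explains the pressure readings.
    The model is pi-periodic, so the search is restricted to (-90, 90) deg."""
    best_a, best_err = 0.0, float("inf")
    for i in range(181):
        a = math.radians(i - 90)
        err = sum((cp_cylinder(t, a) - c) ** 2
                  for t, c in zip(sensor_angles, cp_measured))
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# Simulate 8 hull pressure sensors with flow arriving from 20 degrees.
angles = [2 * math.pi * k / 8 for k in range(8)]
readings = [cp_cylinder(t, math.radians(20)) for t in angles]
print(round(math.degrees(estimate_flow_angle(angles, readings))))  # -> 20
```

    Once the flow angle is available, a rheotaxis behavior reduces to steering that estimate toward zero, which is the spirit of the paper's closed-loop experiments.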

  8. Scalability of a cross-platform multi-threaded non-sequential optical ray tracer

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2011-10-01

    The GelOE optical engineering software implements multi-threaded ray tracing with just a few simple cross-platform OpenMP directives. Timings as a function of the number of threads are presented for two quite different ZEMAX non-sequential sample problems running on a dual-boot 12-core Apple computer and compared not only to ZEMAX but also to FRED (plus single-threaded ASAP and CodeV). Also discussed are the relative merits of using Mac OSX or Windows 7, 32-bit or 64-bit mode, single or double precision floats, and the Intel or GCC compilers. It is found that simple cross-platform multi-threading can be more efficient than the Windows-specific kind used in the commercial codes, and that which ray tracer is fastest depends on the specific problem. Note that besides ray trace speed, overall productivity also depends on other things like visualization, ease-of-use, documentation, and technical support, none of which are rated here.
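
    Thread-scaling timings like those reported above are commonly compared against Amdahl's law, which bounds the speedup by the serial fraction of the code. A small sketch follows; the 95% parallel fraction is an illustrative assumption, not a figure from the paper.

```python
def amdahl_speedup(threads, parallel_fraction):
    """Predicted speedup on `threads` cores when only `parallel_fraction`
    of the runtime (e.g. the ray-trace loop) runs concurrently."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / threads)

# Even with 95% of the tracer parallelised, 12 threads give well under 12x.
for n in (1, 2, 6, 12):
    print(n, round(amdahl_speedup(n, 0.95), 2))
```

    Comparing measured timings against this curve is one way to separate genuine threading overhead from the unavoidable serial remainder.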

  9. OpenADR Open Source Toolkit: Developing Open Source Software for the Smart Grid

    SciTech Connect

    McParland, Charles

    2011-02-01

    Demand response (DR) is becoming an increasingly important part of power grid planning and operation. The advent of the Smart Grid, which mandates its use, further motivates selection and development of suitable software protocols to enable DR functionality. The OpenADR protocol has been developed and is being standardized to serve this goal. We believe that the development of a distributable, open source implementation of OpenADR will benefit this effort and motivate critical evaluation of its capabilities, by the wider community, for providing wide-scale DR services.

  10. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar (Light Detection and Ranging) is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured. The distance to the object is easily calculated using the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and the distances to objects are recorded several thousand times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y and a z attribute. These 3 attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, including properties such as the intensity of the return pulse, the color of the target or even the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g. ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become a necessity: the processing, visualizing or even simply loading of lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers and is very costly. Existing open source lidar approaches the loading and processing of lidar in an iterative fashion that requires

  11. ENKI - An Open Source environmental modelling platform

    NASA Astrophysics Data System (ADS)

    Kolberg, S.; Bruland, O.

    2012-04-01

    The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user specified subroutines into a complete simulation model, and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, that time series exist for input variables, that states are initialised, that GIS data sets exist for static map data, that parameters have manually or automatically calibrated values, etc. By using function calls and memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 time steps simulated per second should be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface
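
    The plug-in approach described above, where a routine declares its variables so the framework can wire maps and series to them and then drive the time loop, can be sketched as follows. The `Routine` base class, the toy degree-day snow routine and all names are illustrative stand-ins, not ENKI's actual C++/DLL interface.

```python
class Routine:
    """Minimal stand-in for an ENKI-style plug-in: a routine declares the
    variables it needs so a framework can wire them up."""
    inputs = ()
    states = ()

    def step(self, cell):
        raise NotImplementedError

class DegreeDaySnow(Routine):
    """Toy degree-day snow routine operating on one grid cell per call."""
    inputs = ("temperature", "precipitation")
    states = ("swe",)

    def step(self, cell):
        melt = max(0.0, 0.2 * cell["temperature"])      # melt when warm
        snow = cell["precipitation"] if cell["temperature"] < 0.0 else 0.0
        cell["swe"] = max(0.0, cell["swe"] + snow - melt)

def run_model(routines, grid, n_steps):
    """Framework loop: advance every routine over every grid cell per step."""
    for _ in range(n_steps):
        for cell in grid:
            for routine in routines:
                routine.step(cell)
    return grid

grid = [{"temperature": 2.0, "precipitation": 0.0, "swe": 5.0}]
run_model([DegreeDaySnow()], grid, 3)
print(grid[0]["swe"])  # ~3.8 after three melt steps
```

    The point of the pattern is the same as ENKI's: the framework owns the region, the time loop and the data structures, while each routine only sees the variables it declared.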

  12. PyEPL: a cross-platform experiment-programming library.

    PubMed

    Geller, Aaron S; Schlefer, Ian K; Sederberg, Per B; Jacobs, Joshua; Kahana, Michael J

    2007-11-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL. PMID:18183912

  13. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. Our literature review suggests that there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and editing, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. PMID:26689324

  14. Multi-modal analysis of aerosol robotic network size distributions for remote sensing applications: dominant aerosol type cases

    NASA Astrophysics Data System (ADS)

    Taylor, M.; Kazadzis, S.; Gerasopoulos, E.

    2014-03-01

    To date, size distributions obtained from the aerosol robotic network (AERONET) have been fit with bi-lognormals defined by six secondary microphysical parameters: the volume concentration, effective radius, and the variance of fine and coarse particle modes. However, since the total integrated volume concentration is easily calculated and can be used as an accurate constraint, the problem of fitting the size distribution can be reduced to that of deducing a single free parameter - the mode separation point. We present a method for determining the mode separation point for equivalent-volume bi-lognormal distributions based on optimization of the root mean squared error and the coefficient of determination. The extracted secondary parameters are compared with those provided by AERONET's Level 2.0 Version 2 inversion algorithm for a set of benchmark dominant aerosol types, including desert dust, biomass burning aerosol, urban sulphate and sea salt. The total volume concentration constraint is then also lifted by performing multi-modal fits to the size distribution using nested Gaussian mixture models, and a method is presented for automating the selection of the optimal number of modes using a stopping condition based on Fisher statistics and via the application of statistical hypothesis testing. It is found that the method for optimizing the location of the mode separation point is independent of the shape of the aerosol volume size distribution (AVSD), does not require the existence of a local minimum in the size interval 0.439 μm ≤ r ≤ 0.992 μm, and shows some potential for optimizing the bi-lognormal fitting procedure used by AERONET particularly in the case of desert dust aerosol. The AVSD of impure marine aerosol is found to require three modes. In this particular case, bi-lognormals fail to recover key features of the AVSD. Fitting the AVSD more generally with multi-modal models allows automatic detection of a statistically significant number of aerosol

  15. Multi-modal analysis of aerosol robotic network size distributions for remote sensing applications: dominant aerosol type cases

    NASA Astrophysics Data System (ADS)

    Taylor, M.; Kazadzis, S.; Gerasopoulos, E.

    2013-12-01

    To date, size distributions obtained from the aerosol robotic network have been fit with bi-lognormals defined by six secondary microphysical parameters: the volume concentration, effective radius, and the variance of fine and coarse particle modes. However, since the total integrated volume concentration is easily calculated and can be used as an accurate constraint, the problem of fitting the size distribution can be reduced to that of deducing a single free parameter - the mode separation point. We present a method for determining the mode separation point for equivalent-volume bi-lognormal distributions based on optimisation of the root mean squared error and the coefficient of determination. The extracted secondary parameters are compared with those provided by AERONET's Level 2.0 Version 2 inversion algorithm for a set of benchmark dominant aerosol types, including desert dust, biomass burning aerosol, urban sulphate and sea salt. The total volume concentration constraint is then also lifted by performing multi-modal fits to the size distribution using nested Gaussian mixture models, and a method is presented for automating the selection of the optimal number of modes using a stopping condition based on Fisher statistics and via the application of statistical hypothesis testing. It is found that the method for optimizing the location of the mode separation point is independent of the shape of the AVSD, does not require the existence of a local minimum in the size interval 0.439 μm ≤ r ≤ 0.992 μm, and shows some potential for optimizing the bi-lognormal fitting procedure used by AERONET particularly in the case of desert dust aerosol. The AVSD of impure marine aerosol is found to require three modes. In this particular case, bi-lognormals fail to recover key features of the AVSD. Fitting the AVSD more generally with multi-modal models allows automatic detection of a statistically significant number of aerosol modes, is applicable to a very diverse range of
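
    The single-free-parameter idea above, scanning candidate mode separation points and keeping the one whose two-mode rebuild best matches the data, can be sketched as follows. The moment-based per-mode fit, the RMSE-only criterion and every parameter value are illustrative simplifications, not the paper's actual equivalent-volume procedure.

```python
import math

def lognormal(r, C, rg, sg):
    """One lognormal volume mode dV/dlnr: volume C, median rg, geo. std sg."""
    return (C / (math.sqrt(2 * math.pi) * math.log(sg))
            * math.exp(-math.log(r / rg) ** 2 / (2 * math.log(sg) ** 2)))

def fit_mode(radii, dv, dlnr):
    """Moment-based estimate of one mode (illustrative, not AERONET's fit)."""
    w = sum(dv)
    mu = sum(v * math.log(r) for r, v in zip(radii, dv)) / w
    var = sum(v * (math.log(r) - mu) ** 2 for r, v in zip(radii, dv)) / w
    return w * dlnr, math.exp(mu), math.exp(math.sqrt(var))

def best_separation(radii, dv, candidates):
    """Scan candidate mode separation points; keep the one whose two-mode
    rebuild has the lowest RMSE against the data."""
    dlnr = math.log(radii[1] / radii[0])  # grid is log-spaced
    best_rc, best_err = None, None
    for rc in candidates:
        fine = [(r, v) for r, v in zip(radii, dv) if r < rc]
        coarse = [(r, v) for r, v in zip(radii, dv) if r >= rc]
        if len(fine) < 3 or len(coarse) < 3:
            continue
        Cf, rgf, sgf = fit_mode([r for r, _ in fine], [v for _, v in fine], dlnr)
        Cc, rgc, sgc = fit_mode([r for r, _ in coarse], [v for _, v in coarse], dlnr)
        model = [lognormal(r, Cf, rgf, sgf) + lognormal(r, Cc, rgc, sgc)
                 for r in radii]
        err = math.sqrt(sum((m - v) ** 2 for m, v in zip(model, dv)) / len(dv))
        if best_err is None or err < best_err:
            best_rc, best_err = rc, err
    return best_rc

# Synthetic bimodal (fine + coarse) volume size distribution on a log grid.
radii = [0.05 * math.exp(0.12 * i) for i in range(48)]
dv = [lognormal(r, 1.0, 0.15, 1.5) + lognormal(r, 2.0, 2.5, 1.8) for r in radii]
sep = best_separation(radii, dv, [r for r in radii if 0.3 < r < 1.3])
print(round(sep, 3))
```

    The candidate window here mirrors the abstract's point that the optimum need not coincide with a local minimum of the AVSD in 0.439-0.992 μm; the scan works on any log-spaced grid.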

  16. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communications links to test how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet requirements.

  17. Guidelines for the implementation of an open source information system

    SciTech Connect

    Doak, J.; Howell, J.A.

    1995-08-01

    This work was initially performed for the International Atomic Energy Agency (IAEA) to help with the Open Source Task of the 93 + 2 Initiative; however, the information should be of interest to anyone working with open sources. The authors cover all aspects of an open source information system (OSIS) including, for example, identifying relevant sources, understanding copyright issues, and making information available to analysts. They foresee this document as a reference point that implementors of a system could augment for their particular needs. The primary organization of this document focuses on specific aspects, or components, of an OSIS; they describe each component and often make specific recommendations for its implementation. This document also contains a section discussing the process of collecting open source data and a section containing miscellaneous information. The appendix contains a listing of various providers, producers, and databases that the authors have come across in their research.

  18. Managing Digital Archives Using Open Source Software Tools

    NASA Astrophysics Data System (ADS)

    Barve, S.; Dongare, S.

    2007-10-01

    This paper describes the use of open source software tools such as MySQL and PHP for creating database-backed websites. Such websites offer many advantages over ones built from static HTML pages. This paper will discuss how OSS tools are used and their benefits, and how, after the successful implementation of these tools, the library took the initiative of implementing an institutional repository using the DSpace open source software.

  19. Open Source Software Licenses for Livermore National Laboratory

    SciTech Connect

    Busby, L.

    2000-08-10

    This paper attempts to develop supporting material in an effort to provide new options for licensing Laboratory-created software. Where employees and the Lab wish to release software codes as so-called ''Open Source'', they need, at a minimum, new licensing language for their released products. Several open source software licenses are reviewed to understand their common elements and to develop recommendations regarding new language.

  20. Learning from hackers: open-source clinical trials.

    PubMed

    Dunn, Adam G; Day, Richard O; Mandl, Kenneth D; Coiera, Enrico

    2012-05-01

    Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. A similar gap was addressed in the software industry by their open-source software movement. Here, we examine how the social and technical principles of the movement can guide the growth of an open-source clinical trial community. PMID:22553248

  1. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods. PMID:23544104

  2. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods. PMID:23544104

  3. Open source electronic health records and chronic disease management

    PubMed Central

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-01-01

    Objective To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). Methods and Materials The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHC that currently use an open source EHR. Results Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. Discussion The ability to modify the open-source EHR to adapt to the CHC environment and leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management, controlled hypertension and increases in tuberculosis vaccinations were assisted through the use of these open source systems. Conclusions The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary and needed chronic disease care among populations served by CHC. PMID:23813566

  4. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  5. An efficient nano-based theranostic system for multi-modal imaging-guided photothermal sterilization in gastrointestinal tract.

    PubMed

    Liu, Zhen; Liu, Jianhua; Wang, Rui; Du, Yingda; Ren, Jinsong; Qu, Xiaogang

    2015-07-01

    Since understanding the healthy status of the gastrointestinal tract (GI tract) is of vital importance, clinical implementations for GI tract-related diseases have attracted much attention along with the rapid development of modern medicine. Here, a multifunctional theranostic system combining X-rays/CT/photothermal/photoacoustic mapping of the GI tract and imaging-guided photothermal anti-bacterial treatment is designed and constructed. PEGylated W18O49 nanosheets (PEG-W18O49) are created via a facile solvothermal method and an in situ probe-sonication approach. In terms of excellent colloidal stability, low cytotoxicity, and neglectable hemolysis of PEG-W18O49, we demonstrate the first example of high-performance four-modal imaging of the GI tract by using these nanosheets as contrast agents. More importantly, due to their intrinsic absorption of NIR light, glutaraldehyde-modified PEG-W18O49 are successfully applied as fault-free targeted photothermal agents for imaging-guided killing of bacteria on a mouse infection model. Critical to pre-clinical and clinical prospects, long-term toxicity is further investigated after oral administration of these theranostic agents. These kinds of tungsten-based nanomaterials exhibit great potential as multi-modal contrast agents for directed visualization of the GI tract and anti-bacterial agents for photothermal sterilization. PMID:25934293

  6. Advances in longitudinal studies of amnestic mild cognitive impairment and Alzheimer's disease based on multi-modal MRI techniques.

    PubMed

    Hu, Zhongjie; Wu, Liyong; Jia, Jianping; Han, Ying

    2014-04-01

    Amnestic mild cognitive impairment (aMCI) is a prodromal stage of Alzheimer's disease (AD), and 75%-80% of aMCI patients finally develop AD. So, early identification of patients with aMCI or AD is of great significance for prevention and intervention. According to cross-sectional studies, it is known that the hippocampus, posterior cingulate cortex, and corpus callosum are key areas in studies based on structural MRI (sMRI), functional MRI (fMRI), and diffusion tensor imaging (DTI) respectively. Recently, longitudinal studies using each MRI modality have demonstrated that the neuroimaging abnormalities generally involve the posterior brain regions at the very beginning and then gradually affect the anterior areas during the progression of aMCI to AD. However, it is not known whether follow-up studies based on multi-modal neuroimaging techniques (e.g., sMRI, fMRI, and DTI) can help build effective MRI models that can be directly applied to the screening and diagnosis of aMCI and AD. Thus, in the future, large-scale multi-center follow-up studies are urgently needed, not only to build an MRI diagnostic model that can be used on a single person, but also to evaluate the variability and stability of the model in the general population. In this review, we present longitudinal studies using each MRI modality separately, and then discuss the future directions in this field. PMID:24574084

  7. Multi-modal miniature microscope: 4M Device for bio-imaging applications - an overview of the system

    NASA Astrophysics Data System (ADS)

    Tkaczyk, Tomasz S.; Rogers, Jeremy D.; Rahman, Mohammed; Christenson, Todd C.; Gaalema, Stephen; Dereniak, Eustace L.; Richards-Kortum, Rebecca; Descour, Michael R.

    2005-09-01

    The multi-modal miniature microscope (4M) device to image morphology and cytochemistry in vivo is a microscope on a chip comprising optical, micro-mechanical, and electronic components. This paper describes all major system components: the optical system, a custom high-speed CMOS detector, and a comb-drive actuator. The hybrid sol-gel lenses, their fabrication and assembly technology, optical system parameters, and various operation modes (fluorescence, reflectance, structured illumination) are also discussed. A particularly interesting method is a structured illumination technique that delivers confocal-imaging capabilities and may be used for optical sectioning. For reconstruction of the sectioned layer a sine approximation algorithm is applied. Structured illumination is produced with a LIGA-fabricated actuator scanning in resonance. The spatial resolution of the system is 1 μm; the image is magnified 4× to match the 4 μm CMOS pixel size (a lateral magnification of 4:1), and the field of view is 250 μm. An overview of the 4M device is combined with the presentation of imaging results for epithelial cell phantoms with optical properties characteristic of normal and cancerous tissue labeled with nanoparticles.
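
The abstract does not spell out the sine approximation algorithm; one classic reconstruction for optical sectioning from three grid-illuminated frames, phase-shifted by 120°, is the root-mean-square demodulation of Neil et al., sketched below on a synthetic uniform object. The function name, the 0.5 modulation depth, and the test object are illustrative, not taken from the 4M system.

```python
import numpy as np

def sim_section(i1, i2, i3):
    """Optically sectioned image from three structured-illumination frames
    whose grid pattern is phase-shifted by 0, 2*pi/3 and 4*pi/3 (classic
    root-mean-square demodulation; one common sine-based reconstruction)."""
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2) / np.sqrt(2)

# Synthetic check: a uniform in-focus object seen through a shifted sine grid.
x = np.linspace(0, 4 * np.pi, 256)
obj = np.ones_like(x)
frames = [obj * (1 + 0.5 * np.sin(x + p)) for p in (0, 2*np.pi/3, 4*np.pi/3)]
section = sim_section(*frames)   # constant, proportional to modulation depth
```

The grid pattern cancels in the demodulated image, leaving only the in-focus (modulated) content, which is why out-of-focus light, which sees no grid contrast, is rejected.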

  8. Single-Step Assembly of Multi-Modal Imaging Nanocarriers: MRI and Long-Wavelength Fluorescence Imaging

    PubMed Central

    Pinkerton, Nathalie M.; Gindy, Marian E.; Calero-DdelC, Victoria L.; Wolfson, Theodore; Pagels, Robert F.; Adler, Derek; Gao, Dayuan; Li, Shike; Wang, Ruobing; Zevon, Margot; Yao, Nan; Pacheco, Carlos; Therien, Michael J.; Rinaldi, Carlos; Sinko, Patrick J.

    2015-01-01

    MRI and NIR-active, multi-modal Composite NanoCarriers (CNCs) are prepared using a simple, one-step process, Flash NanoPrecipitation (FNP). The FNP process allows for the independent control of the hydrodynamic diameter, co-core excipient and NIR dye loading, and iron oxide-based nanocrystal (IONC) content of the CNCs. In the controlled precipitation process, 10 nm IONCs are encapsulated into poly(ethylene glycol) stabilized CNCs to make biocompatible T2 contrast agents. By adjusting the formulation, CNC size is tuned between 80 and 360 nm. Holding the CNC size constant at an intensity weighted average diameter of 99 ± 3 nm (PDI width 28 nm), the particle relaxivity varies linearly with encapsulated IONC content, ranging from 66 to 533 mM⁻¹ s⁻¹ for CNCs formulated with 4 to 16 wt% IONC. To demonstrate the use of CNCs as in vivo MRI contrast agents, CNCs are surface functionalized with liver targeting hydroxyl groups. The CNCs enable the detection of 0.8 mm³ non-small cell lung cancer metastases in mice livers via MRI. Incorporating the hydrophobic, NIR dye PZn3 into CNCs enables complementary visualization with long-wavelength fluorescence at 800 nm. In vivo imaging demonstrates the ability of CNCs to act both as MRI and fluorescent imaging agents. PMID:25925128

  9. Multi-modal adaptive optics system including fundus photography and optical coherence tomography for the clinical setting

    PubMed Central

    Salas, Matthias; Drexler, Wolfgang; Levecq, Xavier; Lamory, Barbara; Ritter, Markus; Prager, Sonja; Hafner, Julia; Schmidt-Erfurth, Ursula; Pircher, Michael

    2016-01-01

    We present a new compact multi-modal imaging prototype that combines an adaptive optics (AO) fundus camera with AO-optical coherence tomography (OCT) in a single instrument. The prototype allows acquiring AO fundus images with a field of view of 4° × 4° and a frame rate of 10 fps. The exposure time of a single image is 10 ms. The short exposure time results in nearly motion artifact-free high resolution images of the retina. The AO-OCT mode allows acquiring volumetric data of the retina at a 200 kHz A-scan rate with a transverse resolution of ~4 µm and an axial resolution of ~5 µm. OCT imaging is acquired within a field of view of 2° × 2° located at the central part of the AO fundus image. Recording of OCT volume data takes 0.8 seconds. The performance of the new system is tested in healthy volunteers and patients with retinal diseases. PMID:27231621

  10. Multi-modal pharmacokinetic modelling for DCE-MRI: using diffusion weighted imaging to constrain the local arterial input function

    NASA Astrophysics Data System (ADS)

    Hamy, Valentin; Modat, Marc; Shipley, Rebecca; Dikaios, Nikos; Cleary, Jon; Punwani, Shonit; Ourselin, Sebastien; Atkinson, David; Melbourne, Andrew

    2014-03-01

    The routine acquisition of multi-modal magnetic resonance imaging data in oncology yields the possibility of combined model fitting of traditionally separate models of tissue structure and function. In this work we hypothesise that diffusion weighted imaging data may help constrain the fitting of pharmacokinetic models to dynamic contrast enhanced (DCE) MRI data. Parameters related to tissue perfusion in the intra-voxel incoherent motion (IVIM) modelling of diffusion weighted MRI provide local information on how tissue is likely to perfuse that can be utilised to guide DCE modelling via local modification of the arterial input function (AIF). In this study we investigate, based on multi-parametric head and neck MRI of 8 subjects (4 with head and neck tumours), the benefit of incorporating parameters derived from the IVIM model within the DCE modelling procedure. Although we find the benefit of this procedure to be marginal on the data used in this work, it is conceivable that a technique of this type will be of greater use in a different application.
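
The IVIM model referred to above is the standard bi-exponential form S(b)/S0 = f·exp(−b·D*) + (1−f)·exp(−b·D), with perfusion fraction f, pseudo-diffusion coefficient D* and tissue diffusion coefficient D. A minimal fitting sketch with SciPy on synthetic, noise-free data; the b-values and parameter values are illustrative and not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Standard IVIM bi-exponential signal model S(b)/S0."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], float)  # s/mm^2
true = (0.10, 0.020, 0.0010)        # f, D*, D (D in mm^2/s)
signal = ivim(b, *true)             # noise-free synthetic signal

# Fit with loose physical bounds (0 <= f <= 1, D* > D by construction here).
popt, _ = curve_fit(ivim, b, signal, p0=(0.2, 0.01, 0.002),
                    bounds=([0, 1e-3, 1e-5], [1, 1, 1e-2]))
```

In practice the fitted f and D* are the perfusion-related parameters that, per the hypothesis above, could inform the local arterial input function in the DCE model.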

  11. Multi-modal adaptive optics system including fundus photography and optical coherence tomography for the clinical setting.

    PubMed

    Salas, Matthias; Drexler, Wolfgang; Levecq, Xavier; Lamory, Barbara; Ritter, Markus; Prager, Sonja; Hafner, Julia; Schmidt-Erfurth, Ursula; Pircher, Michael

    2016-05-01

    We present a new compact multi-modal imaging prototype that combines an adaptive optics (AO) fundus camera with AO-optical coherence tomography (OCT) in a single instrument. The prototype allows acquiring AO fundus images with a field of view of 4° × 4° and a frame rate of 10 fps. The exposure time of a single image is 10 ms. The short exposure time results in nearly motion artifact-free high resolution images of the retina. The AO-OCT mode allows acquiring volumetric data of the retina at a 200 kHz A-scan rate with a transverse resolution of ~4 µm and an axial resolution of ~5 µm. OCT imaging is acquired within a field of view of 2° × 2° located at the central part of the AO fundus image. Recording of OCT volume data takes 0.8 seconds. The performance of the new system is tested in healthy volunteers and patients with retinal diseases. PMID:27231621

  12. a Framework for AN Open Source Geospatial Certification Model

    NASA Astrophysics Data System (ADS)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, and ASPRS, and by software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  13. Your Personal Analysis Toolkit - An Open Source Solution

    NASA Astrophysics Data System (ADS)

    Mitchell, T.

    2009-12-01

    Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. As geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!

  14. Comparison of open-source linear programming solvers.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
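
As a concrete illustration of solving a small LP with an open-source solver: the sketch below uses SciPy's bundled HiGHS backend, which is not one of the four solvers surveyed in the study, purely to show the kind of problem these tools handle.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
# Optimum is at the vertex (x, y) = (4, 0) with objective value 12.
```

The surveyed solvers (CLP, GLPK, lp_solve, MINOS) expose the same problem shape through their own C or file-based interfaces.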

  15. Technology collaboration by means of an open source government

    NASA Astrophysics Data System (ADS)

    Berardi, Steven M.

    2009-05-01

    The idea of open source software originally began in the early 1980s, but it never gained widespread support until recently, largely due to the explosive growth of the Internet. Only the Internet has made this kind of concept possible, bringing together millions of software developers from around the world to pool their knowledge. The tremendous success of open source software has prompted many corporations to adopt the culture of open source and thus share information they previously held secret. The government, and specifically the Department of Defense (DoD), could also benefit from adopting an open source culture. In acquiring satellite systems, the DoD often builds walls between program offices, but installing doors between programs can promote collaboration and information sharing. This paper addresses the challenges and consequences of adopting an open source culture to facilitate technology collaboration for DoD space acquisitions. DISCLAIMER: The views presented here are the views of the author, and do not represent the views of the United States Government, United States Air Force, or the Missile Defense Agency.

  16. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  17. Integration of fiber-optic sensor arrays into a multi-modal tactile sensor processing system for robotic end-effectors.

    PubMed

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  18. Introducing StatHand: A Cross-Platform Mobile Application to Support Students' Statistical Decision Making.

    PubMed

    Allen, Peter J; Roberts, Lynne D; Baughman, Frank D; Loxton, Natalie J; Van Rooy, Dirk; Rock, Adam J; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students' statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand. PMID:26973579
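
The decision-tree idea behind StatHand can be sketched as a walk through annotated questions. The questions and leaf recommendations below are invented for illustration and are not StatHand's actual content.

```python
# A toy decision tree in the spirit of StatHand's guided questions
# (structure and wording are hypothetical).
TREE = {
    "question": "Comparing group means or examining relationships?",
    "answers": {
        "means": {
            "question": "How many groups?",
            "answers": {"two": "independent-samples t test",
                        "three or more": "one-way ANOVA"},
        },
        "relationships": "Pearson correlation",
    },
}

def recommend(tree, answers):
    """Walk the tree with a sequence of answers; return the leaf test name."""
    node = tree
    for a in answers:
        node = node["answers"][a]
    return node
```

For example, `recommend(TREE, ["means", "two"])` returns the t-test leaf; a real tool would attach annotations and follow-up guidance to each node.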

  19. Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1999-01-01

    Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.

  20. A cross-platform GUI to control instruments compliant with SCPI through VISA

    NASA Astrophysics Data System (ADS)

    Roach, Eric; Liu, Jing

    2015-10-01

    In nuclear physics experiments, it is necessary and important to control instruments from a PC, which automates many tasks that require human operations otherwise. Not only does this make long term measurements possible, but it also makes repetitive operations less error-prone. We created a graphical user interface (GUI) to control instruments connected to a PC through RS232, USB, LAN, etc. The GUI is developed using Qt Creator, a cross-platform integrated development environment, which makes it portable to various operating systems, including those commonly used in mobile devices. NI-VISA library is used in the back end so that the GUI can be used to control instruments connected through various I/O interfaces without any modification. Commonly used SCPI commands can be sent to different instruments using buttons, sliders, knobs, and other various widgets provided by Qt Creator. As an example, we demonstrate how we set and fetch parameters and how to retrieve and display data from an Agilent Digital Storage Oscilloscope X3034A with the GUI. Our GUI can be easily used for other instruments compliant with SCPI and VISA with little or no modification.
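
SCPI commands are plain text, so a GUI's widgets ultimately reduce to composing strings like the ones below. The helper functions are hypothetical; the commented-out calls show how such strings are typically sent through NI-VISA from Python using the PyVISA bindings (`ResourceManager`, `open_resource`, `query`), with an illustrative instrument address.

```python
def scpi_set(subsystem, value, unit=""):
    """Compose a SCPI 'set' command, e.g. ':TIMebase:SCALe 0.001'."""
    return f":{subsystem} {value}{unit}"

def scpi_query(subsystem):
    """Compose a SCPI query, e.g. ':MEASure:VPP?'."""
    return f":{subsystem}?"

idn = "*IDN?"                              # IEEE 488.2 identification query
set_tb = scpi_set("TIMebase:SCALe", 1e-3)  # 1 ms/div on a scope
get_vpp = scpi_query("MEASure:VPP")        # peak-to-peak voltage query

# With hardware attached, these strings would be sent via PyVISA, e.g.:
#   import pyvisa
#   rm = pyvisa.ResourceManager()
#   scope = rm.open_resource("TCPIP0::192.168.1.10::INSTR")  # address illustrative
#   print(scope.query(idn))
```

Because the commands are just strings, the same composition logic works unchanged across RS232, USB, and LAN transports, which is what VISA abstracts away.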

  1. MeTA studio: a cross platform, programmable IDE for computational chemists.

    PubMed

    Ganesh, V

    2009-03-01

    The development of a cross-platform, programmable integrated development environment (IDE), MeTA Studio, specifically tailored but not restricted to computational chemists working in the area of quantum chemistry with an emphasis on handling large molecules is presented. The IDE consists of a number of modules which include a visualizer and a programming and collaborative framework. The inbuilt viewer assists in visualizing molecules, their scalar fields, manually fragmenting a molecule, and introduces some innovative but simple techniques for handling large molecules. These include a simple Find language and simultaneous multiple camera views of the molecule. Basic tools needed to handle collaborative computing effectively are also included opening up new vistas for sharing ideas and information among computational chemists working on similar problems. MeTA Studio is an integrated programming environment that provides a rich set of application programming interfaces (APIs) which can be used to easily extend its functionality or build new applications as needed by the users. (http://code.google.com/p/metastudio/). PMID:18711720

  2. Introducing StatHand: A Cross-Platform Mobile Application to Support Students’ Statistical Decision Making

    PubMed Central

    Allen, Peter J.; Roberts, Lynne D.; Baughman, Frank D.; Loxton, Natalie J.; Van Rooy, Dirk; Rock, Adam J.; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students’ statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand. PMID:26973579

  3. A heart team and multi-modality imaging approach to percutaneous closure of a post-myocardial infarction ventricular septal defect

    PubMed Central

    Iyer, Sunil; Bauer, Thurston; Yeung, Michael; Ramm, Cassandra; Kiser, Andy C.; Caranasos, Thomas G.

    2016-01-01

    Post-infarction ventricular septal defect (PI-VSD) is a devastating complication that carries a high mortality with or without surgical repair. Percutaneous closure is an attractive alternative in select patients, though it requires appropriate characterization of the PI-VSD as well as careful device and patient selection. We describe a multidisciplinary and multi-modality imaging approach to successful percutaneous closure of a PI-VSD. PMID:27054108

  4. WE-D-9A-04: Improving Multi-Modality Image Registration Using Edge-Based Transformations

    SciTech Connect

    Wang, Y; Tyagi, N; Veeraraghavan, H; Deasy, J

    2014-06-15

    Purpose: Multi-modality deformable image registration (DIR) for head and neck (HN) radiotherapy is difficult, particularly when matching computed tomography (CT) scans with magnetic resonance imaging (MRI) scans. We hypothesized that the ‘shared information’ between images of different modalities was to be found in some form of edge-based transformation, and that novel edge-based DIR methods might outperform standard DIR methods. Methods: We propose a novel method that combines gray-scale edge-based morphology and mutual information (MI) in two stages. In the first step, we applied a modification of a previously published mathematical morphology method as an efficient gray scale edge estimator, with a denoising function. The results were fed into an MI-based solver (plastimatch). The method was tested on 5 HN patients with pretreatment CT and MR datasets and associated follow-up weekly MR scans. The follow-up MRs showed significant regression in tumor and normal structure volumes as compared to the pretreatment MRs. The MR images used in this study were obtained using fast spin echo based T2w images with a 1 mm isotropic resolution and FOV matching the CT scan. Results: In all cases, the novel edge-based registration method provided better registration quality than MI-based DIR using the original CT and MRI images. For example, the mismatch in carotid arteries was reduced from 3–5 mm to within 2 mm. The novel edge-based method with different registration regularization parameters did not show any distorted deformations, in contrast to the non-realistic deformations resulting from MI on the original images. Processing time was 1.3 to 2 times shorter (edge vs. non-edge). In general, we observed quality improvement and significant calculation time reduction with the new method. Conclusion: Transforming images to an ‘edge-space,’ if designed appropriately, greatly increases the speed and accuracy of DIR.

  5. Expanding neurochemical investigations with multi-modal recording: simultaneous fast-scan cyclic voltammetry, iontophoresis, and patch clamp measurements.

    PubMed

    Kirkpatrick, D C; McKinney, C J; Manis, P B; Wightman, R M

    2016-08-01

    Multi-modal recording describes the simultaneous collection of information across distinct domains. Compared to isolated measurements, such studies can more easily determine relationships between varieties of phenomena. This is useful for neurochemical investigations which examine cellular activity in response to changes in the local chemical environment. In this study, we demonstrate a method to perform simultaneous patch clamp measurements with fast-scan cyclic voltammetry (FSCV) using optically isolated instrumentation. A model circuit simulating concurrent measurements was used to predict the electrical interference between instruments. No significant impact was anticipated between methods, and predictions were largely confirmed experimentally. One exception was due to capacitive coupling of the FSCV potential waveform into the patch clamp amplifier. However, capacitive transients measured in whole-cell current clamp recordings were well below the level of biological signals, which allowed the activity of cells to be easily determined. Next, the activity of medium spiny neurons (MSNs) was examined in the presence of an FSCV electrode to determine how the exogenous potential impacted nearby cells. The activities of both resting and active MSNs were unaffected by the FSCV waveform. Additionally, application of an iontophoretic current, used to locally deliver drugs and other neurochemicals, did not affect neighboring cells. Finally, MSN activity was monitored during iontophoretic delivery of glutamate, an excitatory neurotransmitter. Membrane depolarization and cell firing were observed concurrently with chemical changes around the cell resulting from delivery. In all, we show how combined electrophysiological and electrochemical measurements can relate information between domains and increase the power of neurochemical investigations. PMID:27314130

  6. Quantitative multi-modal MRI of the Hippocampus and cognitive ability in community-dwelling older subjects.

    PubMed

    Aribisala, Benjamin S; Royle, Natalie A; Maniega, Susana Muñoz; Valdés Hernández, Maria C; Murray, Catherine; Penke, Lars; Gow, Alan; Starr, John M; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M

    2014-04-01

    Hippocampal structural integrity is commonly quantified using volumetric measurements derived from brain magnetic resonance imaging (MRI). Previously reported associations with cognitive decline have not been consistent. We investigate hippocampal integrity using quantitative MRI techniques and its association with cognitive abilities in older age. Participants from the Lothian Birth Cohort 1936 underwent brain MRI at mean age 73 years. Longitudinal relaxation time (T1), magnetization transfer ratio (MTR), fractional anisotropy (FA) and mean diffusivity (MD) were measured in the hippocampus. General factors of fluid-type intelligence (g), cognitive processing speed (speed) and memory were obtained at age 73 years, as well as childhood IQ test results at age 11 years. Amongst 565 older adults, multivariate linear regression showed that, after correcting for ICV, gender and age 11 IQ, larger left hippocampal volume was significantly associated with better memory ability (β = .11, p = .003), but not with speed or g. Using quantitative MRI and after correcting for multiple testing, higher T1 and MD were significantly associated with lower scores of g (β range = -.11 to -.14, p < .001), speed (β range = -.15 to -.20, p < .001) and memory (β range = -.10 to -.12, p < .001). Higher MTR and FA in the hippocampus were also significantly associated with higher scores of g (β range = .17 to .18, p < .0001) and speed (β range = .10 to .15, p < .0001), but not memory. Quantitative multi-modal MRI assessments were more sensitive at detecting cognition-hippocampal integrity associations than volumetric measurements, resulting in stronger associations between MRI biomarkers and age-related cognition changes. PMID:24561387

  7. Multi-modal data fusion using source separation: Two effective models based on ICA and IVA and their properties

    PubMed Central

    Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.

    2015-01-01

Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
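The Joint ICA model mentioned above admits a compact sketch: feature vectors from the two modalities are concatenated per subject and a single ICA decomposition is estimated, so each component carries one map per modality coupled through a shared subject loading. A minimal illustration on synthetic data (the subject counts, feature sizes, and use of scikit-learn's FastICA are illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic example: 50 subjects, two modalities with different feature counts,
# generated from 5 shared sources so a joint decomposition is meaningful.
n_subjects, n_feat_a, n_feat_b = 50, 200, 300
sources = rng.laplace(size=(n_subjects, 5))       # shared per-subject loadings
maps_a = rng.normal(size=(5, n_feat_a))           # modality-A component maps
maps_b = rng.normal(size=(5, n_feat_b))           # modality-B component maps
mod_a = sources @ maps_a + 0.1 * rng.normal(size=(n_subjects, n_feat_a))
mod_b = sources @ maps_b + 0.1 * rng.normal(size=(n_subjects, n_feat_b))

# Joint ICA: stack the modalities along the feature axis and run ONE ICA.
# The shared mixing matrix couples the modalities; each estimated component
# then splits back into one map per modality.
joint = np.hstack([mod_a, mod_b])                 # (subjects, feat_a + feat_b)
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
loadings = ica.fit_transform(joint)               # (subjects, components)
comp_a = ica.components_[:, :n_feat_a]            # modality-A part of each component
comp_b = ica.components_[:, n_feat_a:]            # modality-B part
print(loadings.shape, comp_a.shape, comp_b.shape)
```

Because both modalities share one mixing matrix, a component expressed strongly in one subject is expressed in both of that component's maps, which is what lets the decomposition "jointly report on all modalities".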

  8. Benefits of multi-session balance and gait training with multi-modal biofeedback in healthy older adults.

    PubMed

    Lim, Shannon B; Horslen, Brian C; Davis, Justin R; Allum, John H J; Carpenter, Mark G

    2016-06-01

Real-time balance-relevant biofeedback from a wearable sensor can improve balance in many patient populations; however, it is unknown if balance training with biofeedback has lasting benefits for healthy older adults once training is completed and biofeedback removed. This study was designed to determine if multi-session balance training with and without biofeedback leads to changes in balance performance in healthy older adults, and if changes persist after training. 36 participants (age 60-88) were randomly divided into two groups. Both groups trained on seven stance and gait tasks for 2 consecutive weeks (3×/week) while trunk angular sway and task duration were monitored. One group received real-time multi-modal biofeedback of trunk sway and a control group trained without biofeedback. Training effects were assessed at the last training session, with biofeedback available to the feedback group. Post-training effects (without biofeedback) were assessed immediately after, 1 week, and 1 month post-training. Both groups demonstrated training effects; participants swayed less when standing on foam with eyes closed (EC), maintained tandem stance EC longer, and completed 8 tandem steps EC faster and with less sway at the last training session. Changes in sway and duration, indicative of faster walking, were also observed after training for other gait tasks. While changes in walking speed persisted post-training, few other post-training effects were observed. These data suggest there is little added benefit to balance training with biofeedback, beyond training without it, in healthy older adults. However, transient use of wearable balance biofeedback systems as balance aids remains beneficial for challenging balance situations and some clinical populations. PMID:27264396

  9. Experimental verification of a novel MEMS multi-modal vibration energy harvester for ultra-low power remote sensing nodes

    NASA Astrophysics Data System (ADS)

    Iannacci, J.; Sordo, G.; Serra, E.; Kucera, M.; Schmid, U.

    2015-05-01

In this work, we discuss the verification and preliminary experimental characterization of a MEMS-based vibration Energy Harvester (EH) design. The device, named Four-Leaf Clover (FLC), is based on a circular-shaped mechanical resonator with four petal-like mass-spring cascaded systems. This solution introduces several mechanical Degrees of Freedom (DOFs), and therefore enables multiple resonant modes and deformation shapes in the vibration frequency range of interest. The target is to realize a wideband multi-modal EH-MEMS device that overcomes the typical narrowband working characteristics of standard cantilevered EHs, providing a flexible and adaptable power source for ultra-low power electronics in integrated remote sensing nodes (e.g. Wireless Sensor Networks - WSNs) in the Internet of Things (IoT) scenario, with a view toward self-powered, energy-autonomous smart systems. Finite Element Method (FEM) simulations of the FLC EH-MEMS show the presence of several resonant modes for vibrations up to 4-5 kHz, and levels of converted power up to a few μW at resonance in closed-loop conditions (i.e. with a resistive load). The first experimental tests of fabricated FLC samples, conducted with a Laser Doppler Vibrometer (LDV), confirmed the presence of several resonant modes and allowed us to validate the accuracy of the FEM modeling method. This good agreement extends to the coupled-field behavior of the FLC EH-MEMS as well. Both measurements and simulations performed at 190 Hz (i.e. out of resonance) showed the generation of power in the nW range (Root Mean Square - RMS values). Further steps of this work will include experimental characterization across the full vibration range, aiming to demonstrate the complete functionality of the proposed FLC EH-MEMS design concept.

  10. Multi-Modal Homing in Sea Turtles: Modeling Dual Use of Geomagnetic and Chemical Cues in Island-Finding.

    PubMed

    Endres, Courtney S; Putman, Nathan F; Ernst, David A; Kurth, Jessica A; Lohmann, Catherine M F; Lohmann, Kenneth J

    2016-01-01

    Sea turtles are capable of navigating across large expanses of ocean to arrive at remote islands for nesting, but how they do so has remained enigmatic. An interesting example involves green turtles (Chelonia mydas) that nest on Ascension Island, a tiny land mass located approximately 2000 km from the turtles' foraging grounds along the coast of Brazil. Sensory cues that turtles are known to detect, and which might hypothetically be used to help locate Ascension Island, include the geomagnetic field, airborne odorants, and waterborne odorants. One possibility is that turtles use magnetic cues to arrive in the vicinity of the island, then use chemical cues to pinpoint its location. As a first step toward investigating this hypothesis, we used oceanic, atmospheric, and geomagnetic models to assess whether magnetic and chemical cues might plausibly be used by turtles to locate Ascension Island. Results suggest that waterborne and airborne odorants alone are insufficient to guide turtles from Brazil to Ascension, but might permit localization of the island once turtles arrive in its vicinity. By contrast, magnetic cues might lead turtles into the vicinity of the island, but would not typically permit its localization because the field shifts gradually over time. Simulations reveal, however, that the sequential use of magnetic and chemical cues can potentially provide a robust navigational strategy for locating Ascension Island. Specifically, one strategy that appears viable is following a magnetic isoline into the vicinity of Ascension Island until an odor plume emanating from the island is encountered, after which turtles might either: (1) initiate a search strategy; or (2) follow the plume to its island source. These findings are consistent with the hypothesis that sea turtles, and perhaps other marine animals, use a multi-modal navigational strategy for locating remote islands. PMID:26941625

  11. Predictive Markers for AD in a Multi-Modality Framework: An Analysis of MCI Progression in the ADNI Population

    PubMed Central

    Hinrichs, Chris; Singh, Vikas; Xu, Guofan; Johnson, Sterling C.

    2011-01-01

Alzheimer’s Disease (AD) and other neurodegenerative diseases affect over 20 million people worldwide, and this number is projected to significantly increase in the coming decades. Proposed imaging-based markers have shown steadily improving levels of sensitivity/specificity in classifying individual subjects as AD or normal. Several of these efforts have utilized statistical machine learning techniques, using brain images as input, as a means of deriving such AD-related markers. A common characteristic of this line of research is a focus on either (1) using a single imaging modality for classification, or (2) incorporating several modalities, but reporting separate results for each. One strategy to improve on the success of these methods is to leverage all available imaging modalities together in a single automated learning framework. The rationale is that some subjects may show signs of pathology in one modality but not in another – by combining all available images a clearer view of the progression of disease pathology will emerge. Our method is based on the Multi-Kernel Learning (MKL) framework, which allows the inclusion of an arbitrary number of views of the data in a maximum-margin kernel learning setting. The principal innovation behind MKL is that it learns an optimal combination of kernel (similarity) matrices while simultaneously training a classifier. In classification experiments MKL outperformed an SVM trained on all available features by 3% – 4%. We are especially interested in whether such markers are capable of identifying early signs of the disease. To address this question, we have examined whether our multi-modal disease marker (MMDM) can predict conversion from Mild Cognitive Impairment (MCI) to AD. Our experiments reveal that this measure shows significant group differences between MCI subjects who progressed to AD, and those who remained stable for 3 years. These differences were most significant in MMDMs based on imaging data. We also
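The multi-kernel idea described above can be sketched numerically: each imaging modality contributes a kernel (similarity) matrix, and the classifier operates on a weighted combination of them. The sketch below uses fixed weights and kernel ridge regression as a stand-in for the max-margin solver the paper uses; the data, kernel choices, and weights are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "modalities" (views) for 80 subjects sharing a binary label.
y = rng.integers(0, 2, size=80).astype(float)
view1 = rng.normal(size=(80, 40)) + y[:, None]        # e.g. MRI-derived features
view2 = rng.normal(size=(80, 25)) + 0.5 * y[:, None]  # e.g. PET-derived features

def rbf(X, gamma=0.02):
    # Gaussian (RBF) kernel matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# One similarity matrix per modality, then a fixed convex combination.
# A full MKL solver would learn these weights jointly with the classifier.
kernels = [rbf(view1), view2 @ view2.T / view2.shape[1]]
weights = [0.6, 0.4]
K = sum(w * k for w, k in zip(weights, kernels))

# Kernel ridge regression on the combined kernel, thresholded at 0.5,
# as a simple surrogate for the max-margin classifier.
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y - y.mean())
pred = (K @ alpha + y.mean() > 0.5).astype(float)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

The key point the paper exploits is that the combination happens at the kernel level, so modalities with incompatible feature spaces (different dimensions, different units) can still interact inside a single learner.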

  12. Multi-Modal Homing in Sea Turtles: Modeling Dual Use of Geomagnetic and Chemical Cues in Island-Finding

    PubMed Central

    Endres, Courtney S.; Putman, Nathan F.; Ernst, David A.; Kurth, Jessica A.; Lohmann, Catherine M. F.; Lohmann, Kenneth J.

    2016-01-01

    Sea turtles are capable of navigating across large expanses of ocean to arrive at remote islands for nesting, but how they do so has remained enigmatic. An interesting example involves green turtles (Chelonia mydas) that nest on Ascension Island, a tiny land mass located approximately 2000 km from the turtles’ foraging grounds along the coast of Brazil. Sensory cues that turtles are known to detect, and which might hypothetically be used to help locate Ascension Island, include the geomagnetic field, airborne odorants, and waterborne odorants. One possibility is that turtles use magnetic cues to arrive in the vicinity of the island, then use chemical cues to pinpoint its location. As a first step toward investigating this hypothesis, we used oceanic, atmospheric, and geomagnetic models to assess whether magnetic and chemical cues might plausibly be used by turtles to locate Ascension Island. Results suggest that waterborne and airborne odorants alone are insufficient to guide turtles from Brazil to Ascension, but might permit localization of the island once turtles arrive in its vicinity. By contrast, magnetic cues might lead turtles into the vicinity of the island, but would not typically permit its localization because the field shifts gradually over time. Simulations reveal, however, that the sequential use of magnetic and chemical cues can potentially provide a robust navigational strategy for locating Ascension Island. Specifically, one strategy that appears viable is following a magnetic isoline into the vicinity of Ascension Island until an odor plume emanating from the island is encountered, after which turtles might either: (1) initiate a search strategy; or (2) follow the plume to its island source. These findings are consistent with the hypothesis that sea turtles, and perhaps other marine animals, use a multi-modal navigational strategy for locating remote islands. PMID:26941625

  13. A Framework for the Systematic Collection of Open Source Intelligence

    SciTech Connect

    Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  14. Trends and challenges in open source software (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Aylward, Stephen

    2013-10-01

    Over the past decade, the field of medical image analysis research has undergone a rapid evolution. It was a collection of disconnected efforts that were burdened by mundane coding and file I/O tasks. It is now a collaborative community that has embraced open-source software as a shared foundation, reducing mundane coding and I/O burdens, promoting replicable research, and accelerating the pace of research and product development. This talk will review the history and current state of open-source software in medical image analysis research, will discuss the role of intellectual property in research, and will present emerging trends and technologies relevant to the growing importance of open-source software.

  15. The 2015 Bioinformatics Open Source Conference (BOSC 2015)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J. A.; Lapp, Hilmar

    2016-01-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included “Data Science;” “Standards and Interoperability;” “Open Science and Reproducibility;” “Translational Bioinformatics;” “Visualization;” and “Bioinformatics Open Source Project Updates”. In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled “Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community,” that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  16. Data from thermal testing of the Open Source Cryostage.

    PubMed

    Buch, Johannes Lørup; Ramløv, Hans

    2016-09-01

    The data presented here is related to the research article "An open source cryostage and software analysis method for detection of antifreeze activity" (Buch and Ramløv, 2016) [1]. The design of the Open Source Cryostage (OSC) is tested in terms of thermal limits, thermal efficiency and electrical efficiency. This article furthermore includes an overview of the electrical circuitry and a flowchart of the software program controlling the temperature of the OSC. The thermal efficiency data is presented here as degrees per volt and maximum cooling capacity. PMID:27508238

  17. Freeing Crop Genetics through the Open Source Seed Initiative

    PubMed Central

    Luby, Claire H.; Goldman, Irwin L.

    2016-01-01

    For millennia, seeds have been freely available to use for farming and plant breeding without restriction. Within the past century, however, intellectual property rights (IPRs) have threatened this tradition. In response, a movement has emerged to counter the trend toward increasing consolidation of control and ownership of plant germplasm. One effort, the Open Source Seed Initiative (OSSI, www.osseeds.org), aims to ensure access to crop genetic resources by embracing an open source mechanism that fosters exchange and innovation among farmers, plant breeders, and seed companies. Plant breeders across many sectors have taken the OSSI Pledge to create a protected commons of plant germplasm for future generations. PMID:27093567

  18. Open source and DIY hardware for DNA nanotechnology labs

    PubMed Central

    Damase, Tulsi R.; Stephens, Daniel; Spencer, Adam; Allen, Peter B.

    2015-01-01

    A set of instruments and specialized equipment is necessary to equip a laboratory to work with DNA. Reducing the barrier to entry for DNA manipulation should enable and encourage new labs to enter the field. We present three examples of open source/DIY technology with significantly reduced costs relative to commercial equipment. This includes a gel scanner, a horizontal PAGE gel mold, and a homogenizer for generating DNA-coated particles. The overall cost savings obtained by using open source/DIY equipment was between 50 and 90%. PMID:26457320

  19. Human genome and open source: balancing ethics and business.

    PubMed

    Marturano, Antonio

    2011-01-01

The Human Genome Project has been completed thanks to a massive use of computer techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won out over the proprietary model and allowed quick propagation of, and feedback on, research results among peers. In this paper, the author will analyse some ethical and legal issues arising from the use of such a computer model in relation to Human Genome property rights. The author will argue that Open Source is the best business model, as it is able to balance business and human rights perspectives. PMID:22984755

  20. Freeing Crop Genetics through the Open Source Seed Initiative.

    PubMed

    Luby, Claire H; Goldman, Irwin L

    2016-04-01

    For millennia, seeds have been freely available to use for farming and plant breeding without restriction. Within the past century, however, intellectual property rights (IPRs) have threatened this tradition. In response, a movement has emerged to counter the trend toward increasing consolidation of control and ownership of plant germplasm. One effort, the Open Source Seed Initiative (OSSI, www.osseeds.org), aims to ensure access to crop genetic resources by embracing an open source mechanism that fosters exchange and innovation among farmers, plant breeders, and seed companies. Plant breeders across many sectors have taken the OSSI Pledge to create a protected commons of plant germplasm for future generations. PMID:27093567

  1. TH-C-12A-12: Veritas: An Open Source Tool to Facilitate User Interaction with TrueBeam Developer Mode

    SciTech Connect

    Mishra, P; Lewis, J; Etmektzoglou, T; Svatos, M

    2014-06-15

Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using Qt Designer, a ‘what-you-see-is-what-you-get’ (WYSIWYG) tool for building graphical user interfaces (GUIs). Different components of the GUI are integrated using Qt's signals and slots mechanism. Functionalities are added using PySide, an open source, cross-platform Python binding for the Qt framework. The XML code generated is immediately visible, making it an interactive learning tool. A user starts from an anonymized DICOM file or XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter ‘B’ with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter ‘B’ was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open source tool, Veritas, for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open source initiative, creating the possibility for future developments
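The kind of control-point XML generation that Veritas automates behind its GUI can be sketched with Python's standard library. The element and attribute names below are hypothetical placeholders for illustration only; the actual TrueBeam Developer Mode schema differs:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: a <Plan> holds <ControlPoints>, each <ControlPoint>
# listing one child element per mechanical/dose axis. Veritas exposes the
# same idea as a graphical menu per control point.
def build_plan(control_points):
    plan = ET.Element("Plan")
    seq = ET.SubElement(plan, "ControlPoints")
    for cp in control_points:
        node = ET.SubElement(seq, "ControlPoint")
        for axis, value in cp.items():      # e.g. gantry angle, couch, MU
            ET.SubElement(node, axis).text = str(value)
    return ET.tostring(plan, encoding="unicode")

# Two control points sweeping the gantry while the couch translates.
xml_text = build_plan([
    {"GantryRtn": 180.0, "CouchLat": 0.0, "Mu": 0},
    {"GantryRtn": 170.0, "CouchLat": 1.5, "Mu": 10},
])
print(xml_text)
```

Generating the XML programmatically, rather than hand-editing scripts, is what makes the result immediately inspectable and reduces schema errors, which is the accessibility gain the abstract attributes to Veritas.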

  2. A multicenter, cross-platform clinical validation study of cancer cytogenomic arrays.

    PubMed

    Li, Marilyn M; Monzon, Federico A; Biegel, Jaclyn A; Jobanputra, Vaidehi; Laffin, Jennifer J; Levy, Brynn; Leon, Annette; Miron, Patricia; Rossi, Michael R; Toruner, Gokce; Alvarez, Karla; Doho, Gregory; Dougherty, Margaret J; Hu, Xiaofeng; Kash, Shera; Streck, Deanna; Znoyko, Iya; Hagenkord, Jill M; Wolff, Daynna J

    2015-11-01

Cytogenomic microarray analysis (CMA) offers high resolution, genome-wide copy number information and is widely used in clinical laboratories for diagnosis of constitutional abnormalities. The Cancer Genomics Consortium (CGC) conducted a multiplatform, multicenter clinical validation project to compare the reliability and inter- and intralaboratory reproducibility of this technology for clinical oncology applications. Four specimen types were processed on three different microarray platforms, from Affymetrix, Agilent, and Illumina. Each microarray platform was employed at two independent test sites. The results were compared in a blinded manner with current standard methods, including karyotype, FISH, or morphology. Twenty-nine chronic lymphocytic leukemia blood, 34 myelodysplastic syndrome bone marrow, and 30 fresh frozen renal epithelial tumor samples were assessed by all six laboratories. Thirty formalin-fixed paraffin-embedded renal tumor samples were analyzed at the Affymetrix and Agilent test sites only. All study samples were initial diagnostic samples. Array data were analyzed at each participating site and were submitted to caArray for central analysis. Laboratory interpretive results were submitted to the central analysis team for comparison with the standard-of-care assays and for calculation of intraplatform reproducibility and cross-platform concordance. The results demonstrated that the three microarray platforms 1) detect clinically actionable genomic changes in cancer comparable to standard-of-care methods; 2) further define cytogenetic aberrations; 3) identify submicroscopic alterations and loss of heterozygosity (LOH); and 4) yield consistent results within and between laboratories. Based on this study, the CGC concludes that CMA is a sensitive and reliable technique for copy number and LOH assessment that may be used for clinical oncology genomic analysis. PMID:26454669

  3. LipidXplorer: a software for consensual cross-platform lipidomics.

    PubMed

    Herzog, Ronny; Schuhmann, Kai; Schwudke, Dominik; Sampaio, Julio L; Bornstein, Stefan R; Schroeder, Michael; Shevchenko, Andrej

    2012-01-01

LipidXplorer is open-source software that supports the quantitative characterization of complex lipidomes by interpreting large datasets of shotgun mass spectra. LipidXplorer processes spectra acquired on any type of tandem mass spectrometer; it identifies and quantifies molecular species of any ionizable lipid class by considering any known or assumed molecular fragmentation pathway, independently of any resource of reference mass spectra. It also supports any shotgun profiling routine, from high-throughput top-down screening for molecular diagnostics and biomarker discovery to the targeted absolute quantification of low-abundance lipid species. Full documentation on installation and operation of LipidXplorer, including a tutorial, a collection of spectra interpretation scripts, an FAQ and a user forum, is available through the wiki site at: https://wiki.mpi-cbg.de/wiki/lipidx/index.php/Main_Page. PMID:22272252

  4. The Value of Open Source Software Tools in Qualitative Research

    ERIC Educational Resources Information Center

    Greenberg, Gary

    2011-01-01

    In an era of global networks, researchers using qualitative methods must consider the impact of any software they use on the sharing of data and findings. In this essay, I identify researchers' main areas of concern regarding the use of qualitative software packages for research. I then examine how open source software tools, wherein the publisher…

  5. Color science demonstration kit from open source hardware and software

    NASA Astrophysics Data System (ADS)

    Zollers, Michael W.

    2014-09-01

Color science is perhaps the most universally tangible discipline within the optical sciences for people of all ages. Excepting a small and relatively well-understood minority, we can see that the world around us consists of a multitude of colors; yet, describing the "what", "why", and "how" of these colors is not an easy task, especially without some sort of equally colorful visual aids. While static displays (e.g., poster boards, etc.) serve their purpose, there is a growing trend, aided by the recent permeation of small interactive devices into our society, toward interactive and immersive learning. However, for the uninitiated, designing software and hardware for this purpose may not be within the purview of all optical scientists and engineers. Enter open source. "Open source" refers to tools and designs -- hardware or software -- that are available and free to use, often without any restrictive licensing. Open source software may be familiar to some, but the open source hardware movement is relatively new. These are electronic circuit board designs that are provided for free and can be implemented in physical hardware by anyone. This movement has led to the availability of relatively inexpensive, but quite capable, computing power for the creation of small devices. This paper will showcase the design and implementation of the software and hardware that was used to create an interactive demonstration kit for color. Its purpose is to introduce and demonstrate the concepts of color spectra, additive color, color rendering, and metamers.
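The additive-color concept such a kit demonstrates can be sketched in a few lines: overlapping light sources add channel-wise, so red and green light mix to yellow. The linear-RGB representation and simple clipping below are simplifying assumptions (real displays also involve gamma encoding), not a description of the kit's actual software:

```python
# Additive color mixing: light sources add per channel. Values are linear
# RGB intensities in [0, 1], clipped at full intensity.
def mix_additive(*colors):
    return tuple(min(1.0, sum(channel)) for channel in zip(*colors))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)

print(mix_additive(red, green))         # yellow: (1.0, 1.0, 0.0)
print(mix_additive(red, green, blue))   # white:  (1.0, 1.0, 1.0)
```

This is the opposite of subtractive mixing with pigments, where combining primaries removes light and drives the result toward black, a distinction these demonstration kits typically highlight.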

  6. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
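The kind of syntax adaptation OMPC performs can be illustrated by translating a small MATLAB snippet into NumPy by hand. Note this is a semantic sketch of the translation, not the code OMPC actually emits:

```python
import numpy as np

# MATLAB source (1-based indexing, .* element-wise multiply):
#   x = linspace(0, 1, 5);
#   y = x .* x;
#   first = y(1);
#
# Hand-written NumPy equivalent:
x = np.linspace(0, 1, 5)
y = x * x              # MATLAB's .* is plain * on NumPy arrays
first = y[0]           # MATLAB indexes from 1; Python from 0
print(first, y[-1])    # 0.0 1.0
```

Index-base conversion and operator remapping like this are exactly the mechanical details that make manual porting error-prone, which is the gap an automatic compiler such as OMPC aims to close.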

  7. Higher Education Sub-Cultures and Open Source Adoption

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2011-01-01

    Successful adoption of new teaching and learning technologies in higher education requires the consensus of two sub-cultures, namely the technologist sub-culture and the academic sub-culture. This paper examines trends in adoption of open source software (OSS) for teaching and learning by comparing the results of a 2009 survey of 285 Chief…

  8. Open source tools for ATR development and performance evaluation

    NASA Astrophysics Data System (ADS)

    Baumann, James M.; Dilsavor, Ronald L.; Stubbles, James; Mossing, John C.

    2002-07-01

    Early in almost every engineering project, a decision must be made about tools; should I buy off-the-shelf tools or should I develop my own. Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and to maintain licenses, and may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.

  9. Open Source Solutions for Libraries: ABCD vs Koha

    ERIC Educational Resources Information Center

    Macan, Bojan; Fernandez, Gladys Vanesa; Stojanovski, Jadranka

    2013-01-01

    Purpose: The purpose of this study is to present an overview of the two open source (OS) integrated library systems (ILS)--Koha and ABCD (ISIS family), to compare their "next-generation library catalog" functionalities, and to give comparison of other important features available through ILS modules. Design/methodology/approach: Two open source…

  10. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  11. Faculty/Student Surveys Using Open Source Software

    ERIC Educational Resources Information Center

    Kaceli, Sali

    2004-01-01

    This session will highlight an easy survey package which lets non-technical users create surveys, administer surveys, gather results, and view statistics. This is an open source application all managed online via a web browser. By using phpESP, the faculty is given the freedom of creating various surveys at their convenience and link them to their…

  12. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
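    The "syntax adaptation and emulation" OMPC performs can be illustrated with a toy example. The sketch below is not OMPC's actual API; it is a hypothetical wrapper showing one of the semantic gaps such a layer must bridge, namely MATLAB's 1-based, end-inclusive indexing on top of Python's 0-based sequences:

```python
class MArray:
    """Minimal 1-based, end-inclusive sequence wrapper (hypothetical;
    illustrates the kind of emulation a MATLAB-to-Python layer needs)."""

    def __init__(self, data):
        self._data = list(data)

    def __call__(self, i, j=None):
        # MATLAB-style a(i) and a(i:j) access: 1-based, inclusive on both ends.
        if j is None:
            return self._data[i - 1]
        return MArray(self._data[i - 1:j])

    def tolist(self):
        return list(self._data)

a = MArray([10, 20, 30, 40])
print(a(1))              # first element, as MATLAB's a(1)  → 10
print(a(2, 4).tolist())  # inclusive slice, as MATLAB's a(2:4)  → [20, 30, 40]
```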

  13. Is Open Source the ERP Cure-All?

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…

  14. Critical Analysis on Open Source LMSs Using FCA

    ERIC Educational Resources Information Center

    Sumangali, K.; Kumar, Ch. Aswani

    2013-01-01

    The objective of this paper is to apply Formal Concept Analysis (FCA) to identify the best open source Learning Management System (LMS) for an E-learning environment. FCA is a mathematical framework that represents knowledge derived from a formal context. In constructing the formal context, LMSs are treated as objects and their features as…

  15. Open Source Software: Fully Featured vs. "The Devil You Know"

    ERIC Educational Resources Information Center

    Hotrum, Michael; Ludwig, Brian; Baggaley, Jon

    2005-01-01

    The "ILIAS" learning management system (LMS) was evaluated, following its favourable rating in an independent evaluation study of open source software (OSS) products. The current review found "ILIAS" to have numerous features of value to distance education (DE) students and teachers, as well as problems for consideration in the system's ongoing…

  16. Chinese Localisation of Evergreen: An Open Source Integrated Library System

    ERIC Educational Resources Information Center

    Zou, Qing; Liu, Guoying

    2009-01-01

    Purpose: The purpose of this paper is to investigate various issues related to Chinese language localisation in Evergreen, an open source integrated library system (ILS). Design/methodology/approach: A Simplified Chinese version of Evergreen was implemented and tested and various issues such as encoding, indexing, searching, and sorting…

  17. Open Source Projects in Software Engineering Education: A Mapping Study

    ERIC Educational Resources Information Center

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  18. The Case for Open Source Software in Digital Forensics

    NASA Astrophysics Data System (ADS)

    Zanero, Stefano; Huebner, Ewa

    In this introductory chapter we discuss the importance of the use of open source software (OSS), and in particular of free software (FLOSS) in computer forensics investigations including the identification, capture, preservation and analysis of digital evidence; we also discuss the importance of OSS in computer forensics

  19. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  20. Open Source Drug Discovery in Practice: A Case Study

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Background: Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionately affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. Methodology/Principal Findings: A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Conclusions/Significance: Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality

  1. NASA's Open Source Software for Serving and Viewing Global Imagery

    NASA Astrophysics Data System (ADS)

    Roberts, J. T.; Alarcon, C.; Boller, R. A.; Cechini, M. F.; Gunnoe, T.; Hall, J. R.; Huang, T.; Ilavajhala, S.; King, J.; McGann, M.; Murphy, K. J.; Plesea, L.; Schmaltz, J. E.; Thompson, C. K.

    2014-12-01

    The NASA Global Imagery Browse Services (GIBS), which provide open access to an enormous archive of historical and near real-time imagery from NASA-supported satellite instruments, have also released most of their software to the general public as open source. The software packages, originally developed at the Jet Propulsion Laboratory and Goddard Space Flight Center, currently include: 1) the Meta Raster Format (MRF) GDAL driver—GDAL support for a specialized file format used by GIBS to store imagery within a georeferenced tile pyramid for exceptionally fast access; 2) OnEarth—a high performance Apache module used to serve tiles from MRF files via common web service protocols; 3) Worldview—a web mapping client to interactively browse global, full-resolution satellite imagery and download underlying data. Examples that show developers how to use GIBS with various mapping libraries and programs are also available. This stack of tools is intended to provide an out-of-the-box solution for serving any georeferenced imagery. Scientists as well as the general public can use the open source software for their own applications such as developing visualization interfaces for improved scientific understanding and decision support, hosting a repository of browse images to help find and discover satellite data, or accessing large datasets of geo-located imagery in an efficient manner. Open source users may also contribute back to NASA and the wider Earth Science community by taking an active role in evaluating and developing the software. This presentation will discuss the experiences of developing the software in an open source environment and useful lessons learned. To access the open source software repositories, please visit: https://github.com/nasa-gibs/
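    The georeferenced tile pyramid mentioned above can be sketched with a little arithmetic. The function below maps a longitude/latitude to a tile column/row in a simple geodetic pyramid; the level-0 layout (2x1 tiles over the whole globe, doubling per level) is an assumption for illustration and may differ from the exact MRF/GIBS tiling parameters:

```python
def geodetic_tile(lon, lat, level, tiles_x0=2, tiles_y0=1):
    """Map lon/lat (degrees) to (col, row) in a geodetic tile pyramid.

    Hypothetical layout: level 0 covers [-180,180] x [-90,90] with
    tiles_x0 x tiles_y0 tiles; each level doubles the count per axis.
    """
    nx = tiles_x0 * 2 ** level
    ny = tiles_y0 * 2 ** level
    col = min(int((lon + 180.0) / 360.0 * nx), nx - 1)  # clamp the +180 edge
    row = min(int((90.0 - lat) / 180.0 * ny), ny - 1)   # row 0 is the north edge
    return col, row

print(geodetic_tile(0.0, 0.0, 3))  # → (8, 4)
```

    This kind of index arithmetic is what lets a tile server resolve a web-map request to a single tile inside a pyramid without scanning the archive.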

  2. Open Source and ROI: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    A switch to free open source software can minimize cost and allow funding to be diverted to equipment and other programs. For instance, the OpenOffice suite is an alternative to expensive basic application programs offered by major vendors. Many such programs on the market offer features seldom used in education but for which educators must pay.…

  3. Transforming High School Classrooms with Free/Open Source Software: "It's Time for an Open Source Software Revolution"

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2008-01-01

    Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…

  4. Beyond Open Source: According to Jim Hirsch, Open Technology, Not Open Source, Is the Wave of the Future

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    This article presents an interview with Jim Hirsch, an associate superintendent for technology at Plano Independent School District in Plano, Texas. Hirsch serves as a liaison for the open technologies committee of the Consortium for School Networking. In this interview, he shares his opinion on the significance of open source in K-12.

  5. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases

    PubMed Central

    Forbes, Jessica L.; Kim, Regina E. Y.; Paulsen, Jane S.; Johnson, Hans J.

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%. PMID:27536233
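    Finding the "small, disconnected" label regions described above is essentially a connected-component search. The sketch below shows the idea on a toy 2D label grid (the real tool works on 3D volumes via SimpleITK; this is an illustrative reimplementation, not LabelAtlasEditor code):

```python
from collections import deque

def small_label_fragments(grid, min_size):
    """Return (label, component) pairs for 4-connected components
    smaller than min_size, i.e. candidate labeling errors."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    fragments = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            label = grid[r][c]
            # Flood-fill the component containing (r, c).
            comp, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not seen[ny][nx] and grid[ny][nx] == label:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) < min_size:
                fragments.append((label, comp))
    return fragments

atlas = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [2, 1, 1, 1],
         [1, 1, 1, 1]]
# The lone '2' at (2, 0) is a disconnected one-voxel fragment.
print(small_label_fragments(atlas, min_size=2))  # → [(2, [(2, 0)])]
```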

  7. Long-term quality of life after intensified multi-modality treatment of oral cancer including intra-arterial induction chemotherapy and adjuvant chemoradiation

    PubMed Central

    Kovács, Adorján F.; Stefenelli, Ulrich; Thorn, Gerrit

    2015-01-01

    Background: Quality of life (QoL) studies are well established when accompanying trials in head and neck cancer, but studies on long-term survivors are rare. Aims: The aim was to evaluate long-term follow-up patients treated with an intensified multi-modality therapy. Setting and Design: Cross-sectional study, tertiary care center. Patients and Methods: A total of 135 oral/oropharyngeal cancer survivors having been treated with an effective four modality treatment (intra-arterial induction chemotherapy, radical surgery, adjuvant radiation, concurrent systemic chemotherapy) filled European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 and HN35 questionnaires. Mean distance to treatment was 6.1 (1.3–16.6) years. Results were compared with a reference patient population (EORTC reference manual). In-study group comparison was also carried out. Statistical Analysis: One-sample t-test, Mann–Whitney-test, Kruskal–Wallis analysis. Results: QoL scores of both populations were well comparable. Global health status, cognitive and social functioning, fatigue, social eating, status of teeth, mouth opening and dryness, and sticky saliva were significantly worse in the study population; pain and need for pain killers, cough, need for nutritional support, problems with weight loss and gain were judged to be significantly less. Patients 1-year posttreatment had generally worse scores as compared to patients with two or more years distance to treatment. Complex reconstructive measures and adjuvant (chemo) radiation were main reasons for significant impairment of QoL. Conclusion: Subjective disease status of patients following a maximized multi-modality treatment showed an expectable high degree of limitations, but was generally comparable to a reference group treated less intensively, suggesting that the administration of an intensified multi-modality treatment is feasible in terms of QoL/effectivity ratio. PMID:26389030

  8. Long distance education for Croatian nurses with open source software.

    PubMed

    Radenovic, Aleksandar; Kalauz, Sonja

    2006-01-01

    The Croatian Nursing Informatics Association (CNIA) was established as a result of continuing work on promoting nursing informatics in Croatia. The main goals of CNIA are to promote nursing informatics and to educate nurses about nursing informatics and the use of information technology in the nursing process. At the start of its work, CNIA developed three nursing informatics courses, all designed with the support of long distance education using open source software: A - 'From Data to Wisdom', B - 'Introduction to Nursing Informatics' and C - 'Nursing Informatics I'. Courses A and B are prerequisites for course C. The technology used to implement these online courses is based on Claroline, an open source Learning Management System (LMS) and free online collaborative learning platform. Each course is divided into two modules/days: on the first day, participants receive classical classroom instruction; on the second day, they learn from home via e-learning. These are the first nursing informatics courses and the first long distance education offering for nurses in Croatia. PMID:17102315

  9. OPERA: Open-source Pipeline for Espadons Reduction and Analysis

    NASA Astrophysics Data System (ADS)

    Teeple, Douglas

    2014-11-01

    OPERA (Open-source Pipeline for Espadons Reduction and Analysis) is an open-source collaborative software reduction pipeline for ESPaDOnS data. ESPaDOnS is a bench-mounted high-resolution echelle spectrograph and spectro-polarimeter designed to obtain a complete optical spectrum (from 370 to 1,050 nm) in a single exposure with a mode-dependent resolving power between 68,000 and 81,000. OPERA is fully automated, calibrates on two-dimensional images and reduces data to produce one-dimensional intensity and polarimetric spectra. Spectra are extracted using an optimal extraction algorithm. Though designed for CFHT ESPaDOnS data, the pipeline is extensible to other echelle spectrographs.
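    The optimal extraction algorithm mentioned above weights each pixel across the spatial direction by its profile value and inverse noise variance. A minimal one-column sketch in the Horne (1986) style, with hypothetical numbers (not OPERA code), looks like this:

```python
def optimal_extract(data, profile, variance):
    """One-column optimal extraction (simplified sketch).

    data:     sky-subtracted pixel values along the spatial direction
    profile:  normalized spatial profile P (sums to 1)
    variance: per-pixel noise variance
    Returns the variance-weighted flux estimate for this column.
    """
    num = sum(p * d / v for p, d, v in zip(profile, data, variance))
    den = sum(p * p / v for p, v in zip(profile, variance))
    return num / den

# When data = flux * profile exactly, the estimator recovers the flux:
profile = [0.1, 0.2, 0.4, 0.2, 0.1]
data = [10.0, 20.0, 40.0, 20.0, 10.0]
variance = [4.0] * 5
print(optimal_extract(data, profile, variance))  # → 100.0
```

    Compared with a plain sum over the aperture, this weighting suppresses noisy low-signal pixels and improves the signal-to-noise of the extracted spectrum.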

  10. Open Source, Open Standards, and Health Care Information Systems

    PubMed Central

    2011-01-01

    Recognition of the improvements in patient safety, quality of patient care, and efficiency that health care information systems have the potential to bring has led to significant investment. Globally the sale of health care information systems now represents a multibillion dollar industry. As policy makers, health care professionals, and patients, we have a responsibility to maximize the return on this investment. To this end we analyze alternative licensing and software development models, as well as the role of standards. We describe how licensing affects development. We argue for the superiority of open source licensing to promote safer, more effective health care information systems. We claim that open source licensing in health care information systems is essential to rational procurement strategy. PMID:21447469

  11. An open-source, automated platform for visualizing subdural electrodes using 3D CT-MRI coregistration

    PubMed Central

    Pearce, Allison; Krish, Veena T.; Wagenaar, Joost; Chen, Weixuan; Zheng, Yuanjie; Wang, Hongzhi; Lucas, Timothy H.; Gee, James C.; Litt, Brian; Davis, Kathryn A.

    2014-01-01

    Objective: Visualizing implanted subdural electrodes in 3D space can greatly aid planning, executing, and validating resection in epilepsy surgery. Coregistration software is available, but cost, complexity, insufficient accuracy or validation limit adoption. We present a fully automated open-source application, based upon a novel method using post-implant CT and post-implant MR images, for accurately visualizing intracranial electrodes in 3D space. Methods: CT-MR rigid brain coregistration, MR non-rigid registration, and prior-based segmentation were carried out on 7 subjects. Post-implant CT, post-implant MR, and an external labeled atlas were then aligned in the same space. The coregistration algorithm was validated by manually marking identical anatomical landmarks on the post-implant CT and post-implant MR images. Following coregistration, distances between the center of the landmark masks on the post-implant MR and the coregistered CT images were calculated for all subjects. Algorithms were implemented in open-source software and translated into a “drag and drop” desktop application for Apple Mac OS X. Results: Despite post-operative brain deformation, the method was able to automatically align intra-subject multi-modal images and segment cortical subregions so that all electrodes could be visualized on the parcellated brain. Manual marking of anatomical landmarks validated the coregistration algorithm with a mean misalignment distance of 2.87 ± 0.58 mm between the landmarks. Software was easily used by operators without prior image processing experience. Significance: We demonstrate an easy to use, novel platform for accurately visualizing subdural electrodes in 3D space on a parcellated brain. We rigorously validated this method using quantitative measures. The method is unique because it involves no pre-processing, is fully automated, and freely available worldwide. A desktop application, as well as the source code, are both available for download on the
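    The validation metric described above, mean Euclidean distance between paired landmark centers, is straightforward to compute. The sketch below uses hypothetical landmark coordinates (the study reports a mean misalignment of 2.87 ± 0.58 mm on real data):

```python
import math

def mean_landmark_distance(points_a, points_b):
    """Mean Euclidean distance between paired 3D landmark centers (mm)."""
    dists = [math.dist(a, b) for a, b in zip(points_a, points_b)]
    return sum(dists) / len(dists)

# Hypothetical landmark centers on the post-implant MR and coregistered CT:
mr_landmarks = [(10.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
ct_landmarks = [(13.0, 4.0, 0.0), (0.0, 5.0, 2.0)]
print(mean_landmark_distance(mr_landmarks, ct_landmarks))  # → 3.5
```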

  12. ORBKIT: A modular python toolbox for cross-platform postprocessing of quantum chemical wavefunction data.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe; Paulus, Beate; Hege, Hans-Christian; Schild, Axel

    2016-06-15

    ORBKIT is a toolbox for postprocessing electronic structure calculations based on a highly modular and portable Python architecture. The program allows computing a multitude of electronic properties of molecular systems on arbitrary spatial grids from the basis set representation of its electronic wavefunction, as well as several grid-independent properties. The required data can be extracted directly from the standard output of a large number of quantum chemistry programs. ORBKIT can be used as a standalone program to determine standard quantities, for example, the electron density, molecular orbitals, and derivatives thereof. The cornerstone of ORBKIT is its modular structure. The existing basic functions can be arranged in an individual way and can be easily extended by user-written modules to determine any other derived quantity. ORBKIT offers multiple output formats that can be processed by common visualization tools (VMD, Molden, etc.). Additionally, ORBKIT possesses routines to order molecular orbitals computed at different nuclear configurations according to their electronic character and to interpolate the wavefunction between these configurations. The program is open-source under GNU-LGPLv3 license and freely available at https://github.com/orbkit/orbkit/. This article provides an overview of ORBKIT with particular focus on its capabilities and applicability, and includes several example calculations. © 2016 Wiley Periodicals, Inc. PMID:27043934
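    As a deliberately minimal illustration of the grid-based evaluation ORBKIT performs, the sketch below computes an electron density on a few grid points for a single molecular orbital expanded in s-type Gaussians. This is a toy model with made-up coefficients, not ORBKIT code; ORBKIT reads real basis-set data from quantum chemistry output and handles general basis functions:

```python
import math

def gaussian_s(r2, alpha):
    """Normalized 3D s-type Gaussian basis function at squared distance r2."""
    norm = (2.0 * alpha / math.pi) ** 0.75
    return norm * math.exp(-alpha * r2)

def density_on_grid(centers, coeffs, alphas, occ, grid):
    """rho(r) = occ * |phi(r)|^2 on a list of grid points, for one MO
    expanded as phi = sum_i c_i * g_i (illustrative sketch only)."""
    rho = []
    for gx, gy, gz in grid:
        phi = 0.0
        for (cx, cy, cz), c, a in zip(centers, coeffs, alphas):
            r2 = (gx - cx) ** 2 + (gy - cy) ** 2 + (gz - cz) ** 2
            phi += c * gaussian_s(r2, a)
        rho.append(occ * phi * phi)
    return rho

# One basis function at the origin, doubly occupied orbital:
grid = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(density_on_grid([(0.0, 0.0, 0.0)], [1.0], [0.5], 2.0, grid))
```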

  13. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment

    PubMed Central

    Conklin, Emily E.; Lee, Kathyann L.; Schlabach, Sadie A.; Woods, Ian G.

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs. PMID:26240518
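    The core of a tracker like the one described can be sketched with frame differencing: compare consecutive grayscale frames, threshold the change, and take the centroid of the changed pixels. The toy frames below are plain 2D lists (an illustration of the approach, not the VideoHacking source, which processes real video input):

```python
def track_centroid(prev_frame, frame, threshold=30):
    """Centroid (row, col) of pixels whose absolute change between two
    grayscale frames exceeds the threshold, or None if nothing moved."""
    moved = [(r, c)
             for r, row in enumerate(frame)
             for c, val in enumerate(row)
             if abs(val - prev_frame[r][c]) > threshold]
    if not moved:
        return None
    return (sum(r for r, _ in moved) / len(moved),
            sum(c for _, c in moved) / len(moved))

# A bright 'animal' appears near the bottom-right of an empty scene:
blank = [[0] * 5 for _ in range(5)]
frame = [row[:] for row in blank]
frame[3][3] = 255
print(track_centroid(blank, frame))  # → (3.0, 3.0)
```

    Repeating this per frame yields a position trace from which speed, distance traveled, and preferences such as thigmotaxis can be quantified.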

  15. Application note : using open source schematic capture tools with Xyce.

    SciTech Connect

    Russo, Thomas V.

    2013-08-01

    The development of the Xyce™ Parallel Electronic Simulator has focused entirely on the creation of a fast, scalable simulation tool, and has not included any schematic capture or data visualization tools. This application note will describe how to use the open source schematic capture tool gschem and its associated netlist creation tool gnetlist to create basic circuit designs for Xyce, and how to access advanced features of Xyce that are not directly supported by either gschem or gnetlist.
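    The basic workflow the note describes can be sketched as two commands. The backend name and flags below reflect standard gEDA/gnetlist usage (the `spice-sdb` SPICE backend), the file names are placeholders, and the application note itself should be consulted for the Xyce-specific directives it covers:

```shell
# Netlist a gschem schematic with gEDA's SPICE backend, then simulate it.
# (Sketch based on standard gEDA usage; check your installation for the
# available backends and options.)
gnetlist -g spice-sdb -o mycircuit.cir mycircuit.sch
Xyce mycircuit.cir
```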

  16. An open source model for open access journal publication.

    PubMed

    Blesius, Carl R; Williams, Michael A; Holzbach, Ana; Huntley, Arthur C; Chueh, Henry

    2005-01-01

    We describe an electronic journal publication infrastructure that allows a flexible publication workflow, academic exchange around different forms of user submissions, and the exchange of articles between publishers and archives using a common XML based standard. This web-based application is implemented on a freely available open source software stack. This publication demonstrates the Dermatology Online Journal's use of the platform for non-biased independent open access publication. PMID:16779183

  17. GISCube, an Open Source Web-based GIS Application

    NASA Astrophysics Data System (ADS)

    Boustani, M.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    There are many Earth science projects and data systems being developed at the Jet Propulsion Laboratory, California Institute of Technology (JPL) that require the use of Geographic Information Systems (GIS). Three in particular are: (1) the JPL Airborne Snow Observatory (ASO) that measures the amount of water being generated from snow melt in mountains; (2) the Regional Climate Model Evaluation System (RCMES) that compares climate model outputs with remote sensing datasets in the context of model evaluation and the Intergovernmental Panel on Climate Change and for the U.S. National Climate Assessment; and (3) the JPL Snow Server that produces a snow and ice climatology for the Western US and Alaska, for the U.S. National Climate Assessment. Each of these three examples and all other Earth science projects are strongly in need of GIS and geoprocessing capabilities to process, visualize, manage and store geospatial data. Besides some open source GIS libraries and software like ArcGIS, there are comparatively few open source, web-based and easy-to-use applications capable of GIS processing and visualization. To address this, we present GISCube, an open source web-based GIS application that can store, visualize and process GIS and geospatial data. GISCube is powered by Geothon, an open source Python GIS cookbook. Geothon has a variety of geoprocessing tools such as data conversion, processing, spatial analysis and data management tools. GISCube has the capability of supporting a variety of well known GIS data formats in both vector and raster formats, and the system is being expanded to support NASA's and scientific data formats such as netCDF and HDF files. In this talk, we demonstrate how Earth science and other projects can benefit from using GISCube and Geothon, and describe their current goals and our future work in the area.
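    A typical vector geoprocessing primitive of the kind a toolkit like Geothon provides is the point-in-polygon test. The sketch below is a generic ray-casting implementation for illustration, not code taken from Geothon:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test.

    polygon is a list of (x, y) vertices; a horizontal ray is cast from
    the query point toward +x and edge crossings are counted: an odd
    count means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, square))  # → True
print(point_in_polygon(5.0, 2.0, square))  # → False
```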

  18. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performance measures, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O among virtual machines created and managed by the different solutions; (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies; (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula; and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  19. How Open Source Can Still Save the World

    NASA Astrophysics Data System (ADS)

    Behlendorf, Brian

    Many of the world's major problems - economic distress, natural disaster responses, broken health care systems, education crises, and more - are not fundamentally information technology issues. However, in every case mentioned and more, there exist opportunities for Open Source software to uniquely change the way we can address these problems. At times this is about addressing a need for which no sufficient commercial market exists. For others, it is in the way Open Source licenses free the recipient from obligations to the creators, creating a relationship of mutual empowerment rather than one of dependency. For yet others, it is in the way the open collaborative processes that form around Open Source software provide a neutral ground for otherwise competitive parties to find a greatest common set of mutual needs to address together rather than in parallel. Several examples of such software exist today and are gaining traction. Governments, NGOs, and businesses are beginning to recognize the potential and are organizing to meet it. How far can this be taken?

  20. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave

    PubMed Central

    Oosterhof, Nikolaas N.; Connolly, Andrew C.; Haxby, James V.

    2016-01-01

    CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA PMID:27499741

  1. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave.

    PubMed

    Oosterhof, Nikolaas N; Connolly, Andrew C; Haxby, James V

    2016-01-01

    CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA. PMID:27499741

  2. Energy Logic (EL): a novel fusion engine of multi-modality multi-agent data/information fusion for intelligent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2009-04-01

    Rapidly advancing hardware technology, smart sensors and sensor networks are advancing environment sensing. One major potential application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality multi-agent data/information fusion scheme with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results, as compared to a fuzzy logic model, strongly support the validity of the new model and inspire future directions for different levels of fusion and different applications.

  3. Two Phase Non-Rigid Multi-Modal Image Registration Using Weber Local Descriptor-Based Similarity Metrics and Normalized Mutual Information

    PubMed Central

    Yang, Feng; Ding, Mingyue; Zhang, Xuming; Wu, Yi; Hu, Jiani

    2013-01-01

    Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and sum of squared differences (SSD) cannot achieve either high registration accuracy or high registration efficiency. To address this problem, we propose a novel two phase non-rigid multi-modal image registration method by combining Weber local descriptor (WLD) based similarity metrics with the normalized mutual information (NMI) using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the former phase, the second phase is focused on getting accurate transformation parameters related to the small deformation using the NMI. Extensive experiments on T1, T2 and PD weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms the registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computation efficiency. PMID:23765270

  4. Nowcasting influenza outbreaks using open-source media report.

    SciTech Connect

    Ray, Jaideep; Brownstein, John S.

    2013-02-01

    We construct and verify a statistical method to nowcast influenza activity from a time-series of the frequency of reports concerning influenza-related topics. Such reports are published electronically by both public health organizations and newspapers/media sources, and thus can be harvested easily via web crawlers. Since media reports are timely, whereas reports from public health organizations are delayed by at least two weeks, using timely, open-source data to compensate for the lag in "official" reports can be useful. We use morbidity data from networks of sentinel physicians (both the Centers for Disease Control's ILINet and France's Sentinelles network) as the gold standard of influenza-like illness (ILI) activity. The time-series of media reports is obtained from HealthMap (http://healthmap.org). We find that the time-series of media reports shows some correlation (~0.5) with ILI activity; further, this can be leveraged into an autoregressive moving average model with exogenous inputs (ARMAX model) to nowcast ILI activity. We find that the ARMAX models have more predictive skill than autoregressive (AR) models fitted to ILI data, i.e., it is possible to exploit the information content in the open-source data. We also find that when the open-source data are non-informative, the ARMAX models reproduce the performance of AR models. The statistical models are tested on data from the 2009 swine-flu outbreak as well as the mild 2011-2012 influenza season in the U.S.A.
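
The autoregression-with-exogenous-input idea above can be sketched in miniature. The following is an illustrative ARX(1) fit by ordinary least squares, not the authors' model or code; the function names (`fit_arx1`, `nowcast`) and the single-lag, single-regressor form are assumptions for demonstration:

```python
# Toy ARX(1) nowcast: y_t = a*y_{t-1} + b*x_t + c, where y is the delayed
# "official" ILI series and x is the timely media-report count series.

def fit_arx1(y, x):
    """Fit y[t] ~ a*y[t-1] + b*x[t] + c by solving the 3x3 normal equations."""
    rows = [[y[t - 1], x[t], 1.0] for t in range(1, len(y))]
    rhs = [y[t] for t in range(1, len(y))]
    # Normal equations: (A^T A) coef = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coef[r] = (atb[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, 3))) / ata[r][r]
    return coef  # a, b, c

def nowcast(y_prev, x_now, coef):
    """One-step nowcast from last official value plus today's media count."""
    a, b, c = coef
    return a * y_prev + b * x_now + c
```

When the exogenous series is uninformative, the fitted `b` shrinks toward zero and the model degenerates to a plain AR(1), mirroring the paper's observation that ARMAX falls back to AR performance.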

  5. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions. The basic principle is to incorporate measurement information into a model with the aim of improving model results through error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated into hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models exists that is capable of all these tasks: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach to interact with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough so that models can interact even if the model is coded in a different language, represent
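
The create/propagate/get-set/free contract described above can be illustrated with a toy model instance. This is a hypothetical sketch of that contract, not the actual OpenDA or OpenMI API; the class name, method names, and the nudging "analysis" step are all illustrative:

```python
# Minimal model-instance contract: create, propagate, get/set state, finalize.
# The dynamics here are a toy linear reservoir, purely for illustration.

class ModelInstance:
    def __init__(self, initial_state):
        self.state = dict(initial_state)
        self.time = 0.0

    def propagate(self, dt):
        # Linear reservoir: storage drains proportionally to its recession rate
        k = self.state["recession"]
        self.state["storage"] *= (1.0 - k * dt)
        self.time += dt

    def get_values(self, name):
        return self.state[name]

    def set_values(self, name, value):
        # The DA framework overwrites model state with the analysis value
        self.state[name] = value

    def finalize(self):
        # Free the model once assimilation is completed
        self.state = None

# A DA loop alternates forecast and analysis steps through this interface:
m = ModelInstance({"storage": 100.0, "recession": 0.1})
m.propagate(1.0)                        # forecast: storage 100 -> 90
obs = 85.0
gain = 0.5                              # fixed nudging gain, for illustration only
analysis = m.get_values("storage") + gain * (obs - m.get_values("storage"))
m.set_values("storage", analysis)       # analysis: storage -> 87.5
```

Because the framework only touches the model through these four operations, the model itself can live in a different language or process, which is exactly the decoupling the OpenMI-style interface provides.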

  6. OpenStudio: An Open Source Integrated Analysis Platform; Preprint

    SciTech Connect

    Guglielmetti, R.; Macumber, D.; Long, N.

    2011-12-01

    High-performance buildings require an integrated design approach for all systems to work together optimally; systems integration needs to be incorporated in the earliest stages of design for efforts to be cost and energy-use effective. Building designers need a full-featured software framework to support rigorous, multidisciplinary building simulation. An open source framework - the OpenStudio Software Development Kit (SDK) - is being developed to address this need. In this paper, we discuss the needs that drive OpenStudio's system architecture and goals, provide a development status report (the SDK is currently in alpha release), and present a brief case study that illustrates its utility and flexibility.

  7. Patient Access to Their Health Record Using Open Source EHR.

    PubMed

    Chelsom, John; Dogar, Naveed

    2015-01-01

    In both Europe and North America, patients are beginning to gain access to their health records in electronic form. Using the open source cityEHR as an example, we have focussed on the needs of clinical users to gather requirements for patient access, and have implemented these requirements in a new application called cityEHR-PA. The development of a separate application for patient access was necessary to address requirements for security and ease of use. The use of open standards throughout the design of the EHR allows third parties to develop applications for patient access, consuming the individual patient record extracted from the full EHR. PMID:25676956

  8. Open Source Next Generation Visualization Software for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Rinker, George

    2016-01-01

    Mission control is evolving quickly, driven by the requirements of new missions, and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).

  9. Open-source, Rapid Reporting of Dementia Evaluations

    PubMed Central

    Graves, Rasinio S.; Mahnken, Jonathan D.; Swerdlow, Russell H.; Burns, Jeffrey M.; Price, Cathy; Amstein, Brad; Hunt, Suzanne L; Brown, Lexi; Adagarla, Bhargav; Vidoni, Eric D.

    2016-01-01

    The National Institutes of Health Alzheimer's Disease Center consortium requires member institutions to build and maintain a longitudinally characterized cohort with a uniform standard data set. Increasingly, centers are employing electronic data capture to acquire data at annual evaluations. In this paper, the University of Kansas Alzheimer's Disease Center reports on an open-source system of electronic data collection and reporting to improve efficiency. This Center capitalizes on the speed, flexibility and accessibility of the system to enhance the evaluation process while rapidly transferring data to the National Alzheimer's Coordinating Center. This framework holds promise for other consortia that regularly use and manage large, standardized datasets. PMID:26779306

  10. Open source high performance floating-point modules.

    SciTech Connect

    Underwood, Keith Douglas

    2006-02-01

    Given the logic density of modern FPGAs, it is feasible to use FPGAs for floating-point applications. However, it is important that any floating-point units that are used be highly optimized. This paper introduces an open source library of highly optimized floating-point units for Xilinx FPGAs. The units are fully IEEE compliant and achieve approximately 230 MHz operation frequency for double-precision add and multiply in a Xilinx Virtex-2-Pro FPGA (-7 speed grade). This speed is achieved with a 10 stage adder pipeline and a 12 stage multiplier pipeline. The area requirement is 571 slices for the adder and 905 slices for the multiplier.
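
As a back-of-envelope check on the figures quoted above, the pipeline depths and clock rate imply the latencies below. The "one result per cycle once the pipeline is full" throughput is the usual property of a fully pipelined unit, assumed here rather than stated in the abstract:

```python
# Latency/throughput arithmetic for the reported 230 MHz pipelines.
clock_hz = 230e6
cycle_ns = 1e9 / clock_hz                 # ~4.35 ns per pipeline stage
adder_latency_ns = 10 * cycle_ns          # 10-stage adder: ~43.5 ns latency
multiplier_latency_ns = 12 * cycle_ns     # 12-stage multiplier: ~52.2 ns latency
throughput_ops_per_s = clock_hz           # one result per cycle when pipeline is full
```

The trade-off the paper exercises is visible here: deeper pipelines raise latency linearly but preserve one-operation-per-cycle throughput at the higher clock rate.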

  11. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  12. Open source data analysis and visualization software for optical engineering

    NASA Astrophysics Data System (ADS)

    Smith, Greg A.; Lewis, Benjamin J.; Palmer, Michael; Kim, Dae Wook; Loeff, Adrian R.; Burge, James H.

    2012-10-01

    SAGUARO is open-source software developed to simplify data assimilation, analysis, and visualization by providing a single framework for disparate data sources, from raw hardware measurements to optical simulation output. Developed with a user-friendly graphical interface in the MATLAB™ environment, SAGUARO is intended to be easy for the end-user in search of useful optical information as well as for the developer wanting to add new modules and functionalities. We present here the flexibility of the SAGUARO software and discuss how it can be applied to the wider optical engineering community.

  13. Implementing Open Source Platform for Education Quality Enhancement in Primary Education: Indonesia Experience

    ERIC Educational Resources Information Center

    Kisworo, Marsudi Wahyu

    2016-01-01

    Information and Communication Technology (ICT)-supported learning using free and open source platform draws little attention as open source initiatives were focused in secondary or tertiary educations. This study investigates possibilities of ICT-supported learning using open source platform for primary educations. The data of this study is taken…

  14. Open Access, Open Source and Digital Libraries: A Current Trend in University Libraries around the World

    ERIC Educational Resources Information Center

    Krishnamurthy, M.

    2008-01-01

    Purpose: The purpose of this paper is to describe the open access and open source movement in the digital library world. Design/methodology/approach: A review of key developments in the open access and open source movement is provided. Findings: Open source software and open access to research findings are of great use to scholars in developing…

  15. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    NASA Astrophysics Data System (ADS)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical task in preprocessing. Skull-stripping depends on different factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge-based (allowing interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There is already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open source brain extraction tool (OSBET), composed of four steps using simple, well-known operations (optimal thresholding, binary morphology, labeling and geometrical analysis) that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short execution time and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
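
The four named operations can be illustrated on a toy 2D "slice". This is a schematic skeleton of that pipeline, not OSBET's implementation; the threshold value, 4-connectivity, and function names are assumptions for demonstration:

```python
# Skull-stripping skeleton: threshold -> binary morphology -> labeling ->
# geometrical analysis (here: keep the largest connected component).

def threshold(img, t):
    """Binarize a nested-list image at intensity t."""
    return [[1 if v > t else 0 for v in row] for row in img]

def erode(mask):
    """4-neighbour binary erosion: keep a pixel only if all neighbours are set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if mask[i][j] and mask[i-1][j] and mask[i+1][j] and mask[i][j-1] and mask[i][j+1]:
                out[i][j] = 1
    return out

def largest_component(mask):
    """Label 4-connected components and keep the biggest (the 'brain')."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, stack = [], [(i, j)]
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    comp.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            stack.append((na, nb))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for a, b in best:
        out[a][b] = 1
    return out
```

A real tool would run these steps in 3D on voxel data and add the geometric heuristics the paper describes, but the control flow is the same.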

  16. Ambit-Tautomer: An Open Source Tool for Tautomer Generation.

    PubMed

    Kochev, Nikolay T; Paskaleva, Vesselina H; Jeliazkova, Nina

    2013-06-01

    We present a new open source tool for automatic generation of all tautomeric forms of a given organic compound. Ambit-Tautomer is a part of the open source software package Ambit2. It implements three tautomer generation algorithms: a combinatorial method, an improved combinatorial method and an incremental depth-first search algorithm. All algorithms utilize a set of fully customizable rules for tautomeric transformations. The predefined knowledge base covers 1-3, 1-5 and 1-7 proton tautomeric shifts. Some typical supported tautomerism rules are keto-enol, imine-amine, nitroso-oxime, azo-hydrazone, thioketo-thioenol, thionitroso-thiooxime, amidine-imidine, diazoamino-diazoamino, thioamide-iminothiol and nitrosamine-diazohydroxide. Ambit-Tautomer uses a simple energy-based system for tautomer ranking implemented by a set of empirically derived rules. Fine-grained output control is achieved by a set of post-generation filters. We performed an exhaustive comparison of the Ambit-Tautomer incremental algorithm against several other software packages which offer tautomer generation: ChemAxon Marvin, Molecular Networks MN.TAUTOMER, ACDLabs, CACTVS and the CDK implementation of the algorithm, based on the mobile H atoms listed in the InChI. According to the presented test results, Ambit-Tautomer's performance is either comparable to or better than the competing algorithms. The Ambit-Tautomer module is available for download as a Java library, a command line application, a demo web page or an OpenTox API compatible Web service. PMID:27481667

  17. Open Source Quartz Crystal Microbalance with dissipation monitoring

    NASA Astrophysics Data System (ADS)

    Mista, C.; Zalazar, M.; Peñalva, A.; Martina, M.; Reta, J. M.

    2016-04-01

    The dissipation factor, and subsequently the characterization of the viscoelasticity of deposited films, has become crucial for the study of biomolecular adsorption. Most commercial quartz crystal microbalance (QCM) systems offer this feature, but it has not been incorporated into open source systems. This article describes the design, construction, and simulation of an open source QCM module for measuring the dissipation factor. The module includes two blocks: a switch and an envelope detector. The switch rapidly disrupts the excitation of the crystal and connects the output to the envelope detector, which demodulates the amplitude of the signal. Damped sinusoidal signals with different time constants were used to simulate viscous interfaces. The small number of components allowed a double-sided PCB design with reduced dimensions. The simulation results show that the system performs well in the range of biomolecular processes; greater relative errors are observed for time constants below 1 µs. In conclusion, a compact dissipation module has been developed for calculating the dissipation factor with a QCM, showing good performance for use in biomolecular adsorption studies.
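
A sketch of what the software side of such a module computes: after the excitation is switched off, the crystal rings down as A(t) = A0·exp(-t/τ)·sin(2πf·t), and the dissipation factor follows from the decay time as D = 1/(π·f·τ), the standard QCM-D ring-down relation. The log-linear envelope fit below is illustrative, not the article's circuit or firmware; function names are assumptions:

```python
# Estimate the decay time constant from the demodulated envelope, then
# convert it to the dissipation factor D = 1 / (pi * f0 * tau).
import math

def estimate_tau(times, envelope):
    """Least-squares slope of ln(envelope) vs t; the slope equals -1/tau."""
    logs = [math.log(a) for a in envelope]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope

def dissipation(f0, tau):
    """Dissipation factor of a crystal resonating at f0 with decay time tau."""
    return 1.0 / (math.pi * f0 * tau)
```

For a 5 MHz crystal with a 2 µs decay time this gives D ≈ 0.032; noisier envelopes or very short time constants degrade the fit, consistent with the larger relative errors the article reports below 1 µs.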

  18. Clarity: an open-source manager for laboratory automation.

    PubMed

    Delaney, Nigel F; Rojas Echenique, José I; Marx, Christopher J

    2013-04-01

    Software to manage automated laboratories, when interfaced with hardware instruments, gives users a way to specify experimental protocols and schedule activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity, a laboratory automation manager that is hardware agnostic, portable, extensible, and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity, demonstrate an example of its implementation for the automated analysis of bacterial growth, and describe how the program can be extended to manage new hardware. Clarity is mature, well documented, actively developed, written in C# for the Common Language Infrastructure, and is free and open-source software. These advantages set Clarity apart from currently available laboratory automation programs. The source code and documentation for Clarity is available at http://code.google.com/p/osla/. PMID:23032169

  19. Building integrated business environments: analysing open-source ESB

    NASA Astrophysics Data System (ADS)

    Martínez-Carreras, M. A.; García Jimenez, F. J.; Gómez Skarmeta, A. F.

    2015-05-01

    Integration and interoperability are two concepts that have gained significant prominence in the business field, providing tools which enable enterprise application integration (EAI). In this sense, enterprise service bus (ESB) has played a crucial role as the underpinning technology for creating integrated environments in which companies may connect all their legacy-applications. However, the potential of these technologies remains unknown and some important features are not used to develop suitable business environments. The aim of this paper is to describe and detail the elements for building the next generation of integrated business environments (IBE) and to analyse the features of ESBs as the core of this infrastructure. For this purpose, we evaluate how well-known open-source ESB products fulfil these needs. Moreover, we introduce a scenario in which the collaborative system 'Alfresco' is integrated in the business infrastructure. Finally, we provide a comparison of the different open-source ESBs available for IBE requirements. According to this study, Fuse ESB provides the best results, considering features such as support for a wide variety of standards and specifications, documentation and implementation, security, advanced business trends, ease of integration and performance.

  20. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project. PMID:26668933

  1. Open source projects in software engineering education: a mapping study

    NASA Astrophysics Data System (ADS)

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study aims to summarize the literature on how OSP have been used to facilitate students' learning of SE. Method: A systematic mapping study was undertaken by identifying, filtering and classifying primary studies using a predefined strategy. Results: 72 papers were selected and classified. The main results were: (a) most studies focused on comprehensive SE courses, although some dealt with specific areas; (b) the most prevalent approach was the traditional project method; (c) studies' general goals were: learning SE concepts and principles by using OSP, learning open source software or both; (d) most studies tried out ideas in regular courses within the curriculum; (e) in general, students had to work with predefined projects; (f) there was a balance between approaches where instructors had either inside control or no control on the activities performed by students; (g) when learning was assessed, software artefacts, reports and presentations were the main instruments used by teachers, while surveys were widely used for students' self-assessment; (h) most studies were published in the last seven years. Conclusions: The resulting map gives an overview of the existing initiatives in this context and shows gaps where further research can be pursued.

  2. Use of open source distribution for a machine tool controller

    NASA Astrophysics Data System (ADS)

    Shackleford, William P.; Proctor, Frederick M.

    2001-02-01

    In recent years a growing number of government and university labs, non-profit organizations and even a few for-profit corporations have found that making their source code public is good for both developers and users. In machine tool control, a growing number of users are demanding that the controllers they buy be 'open architecture,' which would allow third parties and end-users at least limited ability to modify, extend or replace the components of that controller. This paper examines the advantages and dangers of going one step further and providing 'open source' controllers, by relating the experiences of users and developers of the Enhanced Machine Controller. We also examine some implications for the development of standards for open-architecture but closed-source controllers. Some of the questions we hope to answer include: How can quality be maintained after the source code has been modified? Can the code be trusted to run on expensive machines and parts, or when the safety of the operator is at issue? Can 'open-architecture' but closed-source controllers ever achieve the level of flexibility or extensibility that open-source controllers can?

  3. Building an Open Source Framework for Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Jagers, B.; Meijers, E.; Villars, M.

    2015-12-01

    In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at dynamics of water, sediments, water quality, and ecology throughout the whole system from catchment to coast both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task we started a few years ago with the development of a new open source modeling environment DeltaShell that integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models including generic components for the tracking of sediment, water quality, and ecological quantities throughout the hydrological cycle composed of the aforementioned components. The open source approach combined with a modular approach based on open standards, which allow for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.

  4. Final report for LDRD project 11-0029 : high-interest event detection in large-scale multi-modal data sets : proof of concept.

    SciTech Connect

    Rohrer, Brandon Robinson

    2011-09-01

    Events of interest to data analysts are sometimes difficult to characterize in detail. Rather, they consist of anomalies, events that are unpredicted, unusual, or otherwise incongruent. The purpose of this LDRD was to test the hypothesis that a biologically-inspired anomaly detection algorithm could be used to detect contextual, multi-modal anomalies. There currently is no other solution to this problem, but the existence of a solution would have a great national security impact. The technical focus of this research was the application of a brain-emulating cognition and control architecture (BECCA) to the problem of anomaly detection. One aspect of BECCA in particular was discovered to be critical to improved anomaly detection capabilities: its feature creator. During the course of this project the feature creator was developed and tested against multiple data types. Development direction was drawn from psychological and neurophysiological measurements. Major technical achievements include hierarchical feature sets created from both audio and imagery data.

  5. The Case for Open Source: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    Open source has continued to evolve and in the past three years the development of a graphical user interface has made it increasingly accessible and viable for end users without special training. Open source relies to a great extent on the free software movement. In this context, the term free refers not to cost, but to the freedom users have to…

  6. An Open Source approach to automated hydrological analysis of ungauged drainage basins in Serbia using R and SAGA

    NASA Astrophysics Data System (ADS)

    Zlatanovic, Nikola; Milovanovic, Irina; Cotric, Jelena

    2014-05-01

    Drainage basins are for the most part ungauged or poorly gauged, not only in Serbia but in most parts of the world, usually due to insufficient funds, but also due to the decommissioning of river gauges in upland catchments to focus on more populated downstream areas. Very often, design discharges are needed for these streams or rivers where no streamflow data are available, for various applications. Examples include river training works for flood protection or erosion control, and the design of culverts, water supply facilities, small hydropower plants etc. The estimation of discharges in ungauged basins is most often performed using rainfall-runoff models, whose parameters rely heavily on geomorphometric attributes of the basin (e.g. catchment area, elevation, slopes of channels and hillslopes etc.). The calculation of these, as well as other parameters, is most often done in GIS (Geographic Information System) software environments. This study deals with the application of freely available and open source software and datasets for automating rainfall-runoff analysis of ungauged basins using methodologies currently in use in hydrological practice. The R programming language was used for scripting and automating the hydrological calculations, coupled with SAGA GIS (System for Automated Geoscientific Analyses) for geocomputing functions and terrain analysis. Datasets used in the analyses include the freely available SRTM (Shuttle Radar Topography Mission) terrain data, CORINE (Coordination of Information on the Environment) Land Cover data, as well as soil maps and rainfall data. The choice of free and open source software and datasets makes the project ideal for academic and research purposes and cross-platform projects. The geomorphometric module was tested on more than 100 catchments throughout Serbia and compared to manually calculated values (using topographic maps). The discharge estimation module was tested on 21 catchments where data were available and compared
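The geomorphometric attributes named above (catchment area, elevation, channel and hillslope gradients) are derived from a DEM. As an illustration only, not the authors' R/SAGA code, here is a minimal Python/NumPy sketch of one such attribute, mean terrain slope from a gridded DEM:

```python
import numpy as np

def mean_slope_percent(dem, cellsize):
    """Average terrain slope (%) of a DEM grid via central differences.

    A simplified stand-in for the kind of geomorphometric attribute
    (mean catchment slope) derived with SAGA GIS in the study.
    """
    dz_dy, dz_dx = np.gradient(dem, cellsize)   # gradients along rows, cols
    slope = np.sqrt(dz_dx**2 + dz_dy**2)        # rise over run per cell
    return float(slope.mean() * 100.0)

# Tilted plane rising 1 m per 10 m cell -> a uniform 10% slope.
dem = np.fromfunction(lambda i, j: j * 1.0, (50, 50))
print(mean_slope_percent(dem, cellsize=10.0))  # -> 10.0
```

Real catchment work would of course read the SRTM tiles and delineate the basin first; this only shows the slope computation itself.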

  7. Study protocol: a randomised controlled trial of the effects of a multi-modal exercise program on cognition and physical functioning in older women

    PubMed Central

    2012-01-01

    Background Intervention studies testing the efficacy of cardiorespiratory exercise have shown some promise in terms of improving cognitive function in later life. Recent developments suggest that a multi-modal exercise intervention that includes motor as well as physical training and requires sustained attention and concentration, may better elicit the actual potency of exercise to enhance cognitive performance. This study will test the effect of a multi-modal exercise program, for older women, on cognitive and physical functioning. Methods/design This randomised controlled trial involves community dwelling women, without cognitive impairment, aged 65–75 years. Participants are randomised to exercise intervention or non-exercise control groups, for 16 weeks. The intervention consists of twice weekly, 60 minute, exercise classes incorporating aerobic, strength, balance, flexibility, co-ordination and agility training. Primary outcomes are measures of cognitive function and secondary outcomes include physical functioning and a neurocognitive biomarker (brain derived neurotrophic factor). Measures are taken at baseline and 16 weeks later and qualitative data related to the experience and acceptability of the program are collected from a sub-sample of the intervention group. Discussion If this randomised controlled trial demonstrates that multimodal exercise (that includes motor fitness training) can improve cognitive performance in later life, the benefits will be two-fold. First, an inexpensive, effective strategy will have been developed that could ameliorate the increased prevalence of age-related cognitive impairment predicted to accompany population ageing. Second, more robust evidence will have been provided about the mechanisms that link exercise to cognitive improvement allowing future research to be better focused and potentially more productive. Trial registration Australian and New Zealand Clinical Trial Registration Number: ANZCTR12612000451808 PMID

  8. Flood hazard mapping using open source hydrological tools

    NASA Astrophysics Data System (ADS)

    Tollenaar, Daniel; Wensveen, Lex; Winsemius, Hessel; Schellekens, Jaap

    2014-05-01

    Commonly, flood hazard maps are produced by building detailed hydrological and hydraulic models. These models are forced and parameterized by locally available, high-resolution and preferably high-quality data. The models use a high spatio-temporal resolution, resulting in large computational effort. Also, many hydraulic packages that solve the 1D (canal) and 2D (overland) shallow water equations are neither freeware nor open source. In this contribution, we evaluate whether simplified open source data and models can be used for a rapid flood hazard assessment and to highlight areas where more detail may be required. The validity of this approach is tested by using four combinations of open-source tools: (1) a global hydrological model (PCR-GLOBWB, Van Beek and Bierkens, 2009) with a static inundation routine (GLOFRIS, Winsemius et al. 2013); (2) a global hydrological model with a dynamic inundation model (Subgrid, Stelling, 2012); (3) a local hydrological model (WFLOW) with a static inundation routine; and (4) a local hydrological model with a dynamic inundation model. The applicability of the tools is assessed on (1) accuracy in reproducing the phenomenon, (2) time for model setup and (3) computational time. The performance is tested in a case study of the Rio Mamoré, one of the tributaries of the Amazon River (230,000 km2). References: Stelling, G.S.: Quadtree flood simulations with sub-grid digital elevation models, Proceedings of the ICE - Water Management, Volume 165, Issue 10, 01 November 2012, pages 567-580; Winsemius, H. C., Van Beek, L. P. H., Jongman, B., Ward, P. J., and Bouwman, A.: A framework for global river flood risk assessments, Hydrol. Earth Syst. Sci. Discuss., 9, 9611-9659, doi:10.5194/hessd-9-9611-2012, 2012; Van Beek, L. P. H. and Bierkens, M. F. P.: The global hydrological model PCR-GLOBWB: conceptualization, parameterization and verification, Dept. of Physical Geography, Utrecht University, Utrecht, available at: http

  9. Physics and 3D in Flash Simulations: Open Source Reality

    NASA Astrophysics Data System (ADS)

    Harold, J. B.; Dusenbery, P.

    2009-12-01

    Over the last decade our ability to deliver simulations over the web has steadily advanced. The improvements in speed of the Adobe Flash engine, and the development of open source tools to expand it, allow us to deliver increasingly sophisticated simulation-based games through the browser, with no additional downloads required. In this paper we will present activities we are developing as part of two asteroids education projects: Finding NEO (funded through NSF and NASA SMD), and Asteroids! (funded through NSF). The first activity is Rubble!, an asteroid deflection game built on the open source Box2D physics engine. This game challenges players to push asteroids into safe orbits before they crash into the Earth. The Box2D engine allows us to go well beyond simple 2-body orbital calculations and incorporate “rubble piles”. These objects, which are representative of many asteroids, are composed of 50 or more individual rocks which gravitationally bind and separate in realistic ways. Even bombs can be modeled with sufficient physical accuracy to convince players of the hazards of trying to “blow up” incoming asteroids. The ability to easily build games based on underlying physical models allows us to address physical misconceptions in a natural way: by having the player operate in a world that directly collides with those misconceptions. Rubble! provides a particularly compelling example of this due to the variety of well documented misconceptions regarding gravity. The second activity is a Light Curve challenge, which uses the open source PaperVision3D tools to analyze 3D asteroid models. The goal of this activity is to introduce the player to the concept of “light curves”, measurements of asteroid brightness over time which are used to calculate the asteroid’s period. These measurements can even be inverted to generate three-dimensional models of asteroids that are otherwise too small and distant to directly image. Through the use of the Paper
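The light-curve idea described above can be sketched numerically: given an evenly sampled brightness series, the rotation period falls out of the peak of its power spectrum. A hypothetical Python sketch (the activity itself is Flash/ActionScript; this is only the underlying idea):

```python
import numpy as np

def rotation_period(t, flux):
    """Estimate the dominant period of an evenly sampled light curve
    from the peak of its FFT power spectrum."""
    flux = flux - flux.mean()                      # remove the DC level
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    power = np.abs(np.fft.rfft(flux))
    peak = np.argmax(power[1:]) + 1                # skip zero frequency
    return 1.0 / freqs[peak]

# Synthetic asteroid with a 6-hour rotation period, sampled every 0.1 h
# over 48 hours of observation.
t = np.arange(0, 48, 0.1)
flux = 1.0 + 0.2 * np.sin(2 * np.pi * t / 6.0)
print(round(rotation_period(t, flux), 3))  # -> 6.0
```

Real light curves are unevenly sampled and noisy, so practitioners typically use a Lomb-Scargle periodogram instead of a plain FFT.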

  10. Improving Data Catalogs with Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Schweitzer, R.; Hankin, S.; O'Brien, K.

    2013-12-01

    The Global Earth Observation Integrated Data Environment (GEO-IDE) is NOAA's effort to successfully integrate data and information with partners in the national US-Global Earth Observation System (US-GEO) and the international Global Earth Observation System of Systems (GEOSS). As part of the GEO-IDE, the Unified Access Framework (UAF) is working to build momentum towards the goal of increased data integration and interoperability. The UAF project is moving towards this goal with an approach that includes leveraging well known and widely used standards, as well as free and open source software. The UAF project shares the widely held conviction that the use of data standards is a key ingredient necessary to achieve interoperability. Many community-based consensus standards fail, though, due to poor compliance. Compliance problems emerge for many reasons: because the standards evolve through versions, because documentation is ambiguous, or because individual data providers find the standard inadequate as-is to meet their special needs. In addition, minimalist use of standards will lead to a compliant service, but one which is of low quality. In this presentation, we will be discussing the UAF effort to build a catalog cleaning tool which is designed to crawl THREDDS catalogs, analyze the data available, and then build a 'clean' catalog of data which is standards compliant and has a uniform set of data access services available. These data services include, among others, OPeNDAP, Web Coverage Service (WCS) and Web Mapping Service (WMS). We will also discuss how we are utilizing free and open source software and services to crawl, analyze and build the clean data catalog, as well as our efforts to help data providers improve their data catalogs. We'll discuss the use of open source software such as DataNucleus, Thematic Realtime Environmental Distributed Data Services (THREDDS), ncISO and the netCDF Java Common Data Model (CDM). We'll also demonstrate how we are
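The crawling step described above amounts to parsing THREDDS catalog XML and collecting dataset entries for later compliance analysis. A minimal Python sketch, assuming the standard THREDDS InvCatalog 1.0 namespace and a toy in-memory catalog (not the UAF crawler itself):

```python
import xml.etree.ElementTree as ET

# Namespace used by THREDDS InvCatalog 1.0 documents (assumed here).
NS = "http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0"

def dataset_names(catalog_xml):
    """Collect dataset names from a THREDDS catalog document, the first
    step of a crawl-and-filter workflow."""
    root = ET.fromstring(catalog_xml)
    return [d.get("name") for d in root.iter("{%s}dataset" % NS)]

# Toy catalog standing in for a crawled remote catalog.
catalog = """<catalog xmlns="%s">
  <dataset name="sst_monthly"/>
  <dataset name="wind_daily"/>
</catalog>""" % NS

print(dataset_names(catalog))  # -> ['sst_monthly', 'wind_daily']
```

A real crawler would fetch catalogs over HTTP, follow `catalogRef` links recursively, and inspect each dataset's service elements before deciding whether it belongs in the clean catalog.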

  11. Inexpensive Open-Source Data Logging in the Field

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2013-12-01

    I present a general-purpose open-source field-capable data logger, which provides a mechanism to develop dense networks of inexpensive environmental sensors. This data logger was developed as a low-power variant of the Arduino open-source development system, and is named the ALog ("Arduino Logger") BottleLogger (it is slim enough to fit inside a Nalgene water bottle) version 1.0. It features an integrated high-precision real-time clock, an SD card slot for high-volume data storage, and integrated power switching. The ALog can interface with sensors via six analog/digital pins, two digital pins, and one digital interrupt pin that can read event-based inputs, such as those from a tipping-bucket rain gauge. We have successfully tested the ALog BottleLogger with ultrasonic rangefinders (for water stage and snow accumulation and melt), temperature sensors, tipping-bucket rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that it triggers based on events. The source code for the ALog, including functions to interface with a range of commercially available sensors, is provided as an Arduino C++ library with example implementations. All schematics, circuit board layouts, and source code files are open-source and freely available under GNU GPL v3.0 and Creative Commons Attribution-ShareAlike 3.0 Unported licenses. Through this work, we hope to foster a community-driven movement to collect field environmental data on a budget that permits citizen-scientists and researchers from low-income countries to collect the same high-quality data as researchers in wealthy countries. These data can provide information about global change to managers, governments, scientists, and interested citizens worldwide. Watertight box with ALog BottleLogger data logger on the left and battery pack with three D cells on the right. Data can be collected for 3-5 years on one set of batteries.
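The event-based logging scheme described above (e.g. a tipping-bucket gauge firing the interrupt pin) amounts to appending one timestamped record per event and aggregating afterwards. A schematic Python sketch of that record format, with an assumed 0.2 mm-per-tip calibration (the ALog firmware itself is Arduino C++):

```python
import csv
import io
from datetime import datetime

MM_PER_TIP = 0.2  # rainfall per bucket tip; sensor-dependent, assumed here

def log_tip(writer, when):
    """Record one tipping-bucket event with its timestamp, mimicking the
    event-based records an ALog would append to its SD card."""
    writer.writerow([when.isoformat(), "rain_tip", MM_PER_TIP])

# Simulate three tips during a shower, logged to an in-memory "card".
buf = io.StringIO()
writer = csv.writer(buf)
for minute in (0, 7, 12):
    log_tip(writer, datetime(2013, 6, 1, 14, minute))

# Post-processing: count events and total the rainfall.
tips = list(csv.reader(io.StringIO(buf.getvalue())))
print(len(tips), round(sum(float(row[2]) for row in tips), 2))  # -> 3 0.6
```

The on-board version would write to the SD card inside the interrupt-driven wake cycle rather than to an in-memory buffer.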

  12. Open source GIS for HIV/AIDS management

    PubMed Central

    Vanmeulebrouk, Bas; Rivett, Ulrike; Ricketts, Adam; Loudon, Melissa

    2008-01-01

    Background Reliable access to basic services can improve a community's resilience to HIV/AIDS. Accordingly, work is being done to upgrade the physical infrastructure in affected areas, often employing a strategy of decentralised service provision. Spatial characteristics are one of the major determinants in implementing services, even in the smaller municipal areas, and good quality spatial information is needed to inform decision making processes. However, limited funds, technical infrastructure and human resource capacity result in little or no access to spatial information for crucial infrastructure development decisions at local level. This research investigated whether it would be possible to develop a GIS for basic infrastructure planning and management at local level. Given the resource constraints of the local government context, particularly in small municipalities, it was decided that open source software should be used for the prototype system. Results The design and development of a prototype system illustrated that it is possible to develop an open source GIS system that can be used within the context of local information management. Usability tests show a high degree of usability for the system, which is important considering the heavy workload and high staff turnover that characterises local government in South Africa. Local infrastructure management stakeholders interviewed in a case study of a South African municipality see the potential for the use of GIS as a communication tool and are generally positive about the use of GIS for these purposes. They note security issues that may arise through the sharing of information, lack of skills and resource constraints as the major barriers to adoption. Conclusion The case study shows that spatial information is an identified need at local level. Open source GIS software can be used to develop a system to provide local-level stakeholders with spatial information. However, the suitability of the technology

  13. Integrating HCI Specialists into Open Source Software Development Projects

    NASA Astrophysics Data System (ADS)

    Hedberg, Henrik; Iivari, Netta

    Typical open source software (OSS) development projects are organized around technically talented developers, whose communication is based on technical aspects and source code. Decision-making power is gained through proven competence and activity in the project, and non-technical end-user opinions are too often neglected. In addition, human-computer interaction (HCI) specialists have encountered difficulties in trying to participate in OSS projects, because there seems to be no clear authority and responsibility for them. In this paper, based on HCI and OSS literature, we introduce an extended OSS development project organization model that adds a new level of communication and roles for attending to the human aspects of software. The proposed model makes the existence of HCI specialists visible in the projects, and promotes interaction between developers and HCI specialists over the course of a project.

  14. GRASS GIS: The first Open Source Temporal GIS

    NASA Astrophysics Data System (ADS)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

    GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore equivalent to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality ranges from space time dataset and time stamped map layer management
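The temporal-topology reasoning mentioned above can be illustrated with a toy classifier over time intervals. A simplified Python sketch covering a small subset of the interval relations such a framework works with (this is not the GRASS Temporal Framework API):

```python
from datetime import date

def temporal_relation(a, b):
    """Classify how interval a relates to interval b.

    Only a small subset of the interval relations; intervals that merely
    touch are lumped in with "precedes" for simplicity.
    """
    a_start, a_end = a
    b_start, b_end = b
    if a_end <= b_start:
        return "precedes"
    if (a_start, a_end) == (b_start, b_end):
        return "equals"
    if a_start >= b_start and a_end <= b_end:
        return "during"
    return "overlaps"

spring = (date(2015, 3, 1), date(2015, 6, 1))
april = (date(2015, 4, 1), date(2015, 5, 1))
summer = (date(2015, 6, 1), date(2015, 9, 1))
print(temporal_relation(april, spring))   # -> during
print(temporal_relation(spring, summer))  # -> precedes
```

A temporal GIS applies this kind of classification pairwise across thousands of time-stamped map layers to select, aggregate, or sample them by topology.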

  15. The Pixhawk Open-Source Computer Vision Framework for Mavs

    NASA Astrophysics Data System (ADS)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems are, however, geared towards operation at safe, obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  16. Introducing djatoka: a reuse friendly, open source JPEG 2000 image server

    SciTech Connect

    Chute, Ryan M; Van De Sompel, Herbert

    2008-01-01

    The ISO-standardized JPEG 2000 image format has started to attract significant attention. Support for the format is emerging in major consumer applications, and the cultural heritage community seriously considers it a viable format for digital preservation. So far, only commercial image servers with JPEG 2000 support have been available. They come with significant license fees and typically provide customers with limited extensibility capabilities. Here, we introduce djatoka, an open source JPEG 2000 image server with an attractive basic feature set, and extensibility under control of the community of implementers. We describe djatoka, and point at demonstrations that feature digitized images of marvelous historical manuscripts from the collections of the British Library and the University of Ghent. We also call upon the community to engage in further development of djatoka.

  17. Dentocase - open-source education management system in dentistry.

    PubMed

    Peroz, I; Seidel, O; Böning, K; Bösel, C; Schütte, U

    2004-04-01

    Since 2001, an interdisciplinary project on multimedia education in medicine has been sponsored by the Federal Ministry of Education and Research at the Charité. One part of the project is on dentistry. In the light of the results of a survey of dental students, an Internet-based education management system was created using open-source back-end systems. It supports four didactic levels for editing documentation of patient treatments. Each level corresponds to the learning abilities of the students. The patient documentation is organized to simulate the working methods of a physician or dentist. The system was tested for the first time by students in the summer semester of 2003 and has been used since the winter semester of 2003 as part of the curriculum. PMID:15516095

  18. Spatial Information Processing: Standards-Based Open Source Visualization Technology

    NASA Astrophysics Data System (ADS)

    Hogan, P.

    2009-12-01

    Spatial information intelligence is a global issue that will increasingly affect our ability to survive as a species. Collectively we must better appreciate the complex relationships that make life on Earth possible. Providing spatial information in its native context can accelerate our ability to process that information. To maximize this ability to process information, three basic elements are required: data delivery (server technology), data access (client technology), and data processing (information intelligence). NASA World Wind provides open source client and server technologies based on open standards. The possibilities for data processing and data sharing are enhanced by this inclusive infrastructure for geographic information. It is interesting that this open source and open standards approach, unfettered by proprietary constraints, simultaneously provides for entirely proprietary use of this same technology. 1. WHY WORLD WIND? NASA World Wind began as a single program with specific functionality, to deliver NASA content. But as the possibilities for virtual globe technology became more apparent, we found that while enabling a new class of information technology, we were also getting in the way. Researchers, developers and even users expressed their desire for World Wind functionality in ways that would service their specific needs. They want it in their web pages. They want to add their own features. They want to manage their own data. They told us that only with this kind of flexibility, could their objectives and the potential for this technology be truly realized. World Wind client technology is a set of development tools, a software development kit (SDK) that allows a software engineer to create applications requiring geographic visualization technology. 2. MODULAR COMPONENTRY Accelerated evolution of a technology requires that the essential elements of that technology be modular components such that each can advance independent of the other

  19. Performance testing open source products for the TMT event service

    NASA Astrophysics Data System (ADS)

    Gillies, K.; Bhate, Yogesh

    2014-07-01

    The software system for TMT is a distributed system with many components on many computers. Each component integrates with the overall system using a set of software services. The Event Service is a publish-subscribe message system that allows the distribution of demands and other events. The performance requirements for the Event Service are demanding with a goal of over 60 thousand events/second. This service is critical to the success of the TMT software architecture; therefore, a project was started to survey the open source and commercial market for viable software products. A trade study led to the selection of five products for thorough testing using a specially constructed computer/network configuration and test suite. The best performing product was chosen as the basis of a prototype Event Service implementation. This paper describes the process and performance tests conducted by Persistent Systems that led to the selection of the product for the prototype Event Service.
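The publish-subscribe pattern and the throughput metric being benchmarked can be sketched with a toy in-process bus. This is illustrative only; a real event service is networked and brokered, so the numbers from a sketch like this say nothing about the products the survey tested:

```python
import time
from collections import defaultdict

class EventService:
    """Minimal in-process publish-subscribe bus, to illustrate the
    interface and the events-per-second measurement."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for all events published on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        """Deliver an event to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(event)

bus = EventService()
received = []
bus.subscribe("tmt.demands", received.append)

# Measure delivery throughput over a burst of demand events.
n = 100_000
start = time.perf_counter()
for i in range(n):
    bus.publish("tmt.demands", i)
rate = n / (time.perf_counter() - start)
print(f"{len(received)} events delivered, {rate:.0f} events/s")
```

The 60,000 events/second requirement applies across process and machine boundaries, which is precisely why serialization, transport, and broker overheads dominate real benchmarks.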

  20. Open-Source Software in Computational Research: A Case Study

    DOE PAGESBeta

    Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; Gel, Aytekin; Pannala, Sreekanth

    2008-01-01

    A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.

  1. Open source cardiology electronic health record development for DIGICARDIAC implementation

    NASA Astrophysics Data System (ADS)

    Dugarte, Nelson; Medina, Rubén; Huiracocha, Lourdes; Rojas, Rubén

    2015-12-01

    This article presents the development of a Cardiology Electronic Health Record (CEHR) system. The software consists of a structured algorithm designed under the Health Level-7 (HL7) international standard. The novelty of the system is its integration of high-resolution ECG (HRECG) signal acquisition and processing tools, patient information management tools and telecardiology tools. The acquisition tools manage and control the functions of the DIGICARDIAC electrocardiograph. The processing tools support HRECG signal analysis, searching for patterns indicative of cardiovascular pathologies. The incorporation of telecardiology tools allows the system to communicate with other health care centers, decreasing access time to patient information. The CEHR system was developed entirely using open source software. Preliminary results of process validation showed the system's efficiency.

  2. IP address management : augmenting Sandia's capabilities through open source tools.

    SciTech Connect

    Nayar, R. Daniel

    2005-08-01

    Internet Protocol (IP) address management is a growing concern at Sandia National Laboratories (SNL) and in the networking community as a whole. The available IP address space is nearly exhausted, and SNL currently does not have the justification to obtain more address space from the Internet Assigned Numbers Authority (IANA). A local entity must therefore manage and allocate IP assignments efficiently. Ongoing efforts at Sandia have taken the form of a multifunctional database application known as the Network Information System (NWIS), a database responsible for a multitude of network administrative services, including IP address management. This study explores the feasibility of augmenting NWIS's IP management capabilities using open source tools. Modifications of existing capabilities to better allocate the available IP address space are studied.
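The core allocation task of an IP address management tool can be sketched with Python's standard `ipaddress` module: carving fixed-size subnets out of a managed pool. This is illustrative only, not how NWIS itself works:

```python
import ipaddress

def allocate_subnets(pool, prefixlen, count):
    """Carve `count` subnets of the given prefix length out of a pool,
    a simplified version of what an IP address management tool does
    when handing out address blocks."""
    subnets = pool.subnets(new_prefix=prefixlen)
    return [next(subnets) for _ in range(count)]

# Hand out three /24 blocks from a site's 10.0.0.0/16 pool.
pool = ipaddress.ip_network("10.0.0.0/16")
allocated = allocate_subnets(pool, prefixlen=24, count=3)
print([str(net) for net in allocated])
# -> ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```

A production system would additionally persist which blocks are assigned, reclaim released ones, and track per-host leases within each block.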

  3. Open-Source Software in Computational Research: A Case Study

    SciTech Connect

    Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; Gel, Aytekin; Pannala, Sreekanth

    2008-01-01

    A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; and the facilitation of peer review of the results of computational research.

  4. Management of Astronomical Software Projects with Open Source Tools

    NASA Astrophysics Data System (ADS)

    Briegel, F.; Bertram, T.; Berwein, J.; Kittmann, F.

    2010-12-01

    In this paper we offer an innovative approach to managing the software development process with free open source tools: for building and automated testing, a system that automates the compile/test cycle on a variety of platforms to validate code changes, using virtualization to compile in parallel on various operating system platforms; version control and change management; an enhanced wiki and issue tracking system for online documentation and reporting; and groupware tools such as a blog, discussion forum and calendar. Starting with the Linc-Nirvana instrument, a new project and configuration management tool for developing astronomical software was sought. After evaluating various systems of this kind, we are satisfied with the selection we are now using. Following the lead of Linc-Nirvana, most of the other software projects at the MPIA now use it as well.

  5. Conceptual Architecture of Building Energy Management Open Source Software (BEMOSS)

    SciTech Connect

    Khamphanchai, Warodom; Saha, Avijit; Rathinavel, Kruthika; Kuzlu, Murat; Pipattanasomporn, Manisa; Rahman, Saifur; Akyol, Bora A.; Haack, Jereme N.

    2014-12-01

    The objective of this paper is to present a conceptual architecture of a Building Energy Management Open Source Software (BEMOSS) platform. The proposed BEMOSS platform is expected to improve sensing and control of equipment in small- and medium-sized buildings, reduce energy consumption and help implement demand response (DR). It aims to offer: scalability, robustness, plug and play, open protocol, interoperability, cost-effectiveness, as well as local and remote monitoring. In this paper, four essential layers of BEMOSS software architecture -- namely User Interface, Application and Data Management, Operating System and Framework, and Connectivity layers -- are presented. A laboratory test bed to demonstrate the functionality of BEMOSS located at the Advanced Research Institute of Virginia Tech is also briefly described.

  6. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.
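    The core idea of the domain decomposition described above can be illustrated with a toy sketch (this is not the actual MFIX-DEM implementation): the simulation box is split into slabs, each particle is assigned to the rank owning its slab, and particles near slab boundaries must additionally be exchanged with the neighbouring rank.

```python
# Toy sketch of a 1-D domain decomposition for a parallel DEM solver.
# Illustrative only -- not the MFIX-DEM scheme; function names are invented.

def assign_ranks(particle_x, x_min, x_max, n_ranks):
    """Map each particle to a rank by uniform 1-D slab decomposition."""
    width = (x_max - x_min) / n_ranks
    ranks = []
    for x in particle_x:
        r = int((x - x_min) / width)
        ranks.append(min(r, n_ranks - 1))  # clamp particles at the upper edge
    return ranks

def halo_particles(particle_x, ranks, x_min, x_max, n_ranks, cutoff):
    """Indices of particles within `cutoff` of a slab boundary; these must
    also be sent to the neighbouring rank (the 'ghost'/halo exchange whose
    cost grows with problem size)."""
    width = (x_max - x_min) / n_ranks
    halo = []
    for i, (x, r) in enumerate(zip(particle_x, ranks)):
        left_edge = x_min + r * width
        right_edge = left_edge + width
        if (x - left_edge < cutoff and r > 0) or \
           (right_edge - x < cutoff and r < n_ranks - 1):
            halo.append(i)
    return halo

xs = [0.05, 0.49, 0.51, 0.95]
ranks = assign_ranks(xs, 0.0, 1.0, 2)                       # -> [0, 0, 1, 1]
halo = halo_particles(xs, ranks, 0.0, 1.0, 2, cutoff=0.05)  # -> [1, 2]
```

    Only the particles straddling a boundary generate communication, which is why the computational cost per rank stays constant under weak scaling while the global communication cost grows.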

  7. IntAct: an open source molecular interaction database

    PubMed Central

    Hermjakob, Henning; Montecchi-Palazzi, Luisa; Lewington, Chris; Mudali, Sugath; Kerrien, Samuel; Orchard, Sandra; Vingron, Martin; Roechert, Bernd; Roepstorff, Peter; Valencia, Alfonso; Margalit, Hanah; Armstrong, John; Bairoch, Amos; Cesareni, Gianni; Sherman, David; Apweiler, Rolf

    2004-01-01

    IntAct provides an open source database and toolkit for the storage, presentation and analysis of protein interactions. The web interface provides both textual and graphical representations of protein interactions, and allows exploring interaction networks in the context of the GO annotations of the interacting proteins. A web service allows direct computational access to retrieve interaction networks in XML format. IntAct currently contains ∼2200 binary and complex interactions imported from the literature and curated in collaboration with the Swiss-Prot team, making intensive use of controlled vocabularies to ensure data consistency. All IntAct software, data and controlled vocabularies are available at http://www.ebi.ac.uk/intact. PMID:14681455

  8. IntAct: an open source molecular interaction database.

    PubMed

    Hermjakob, Henning; Montecchi-Palazzi, Luisa; Lewington, Chris; Mudali, Sugath; Kerrien, Samuel; Orchard, Sandra; Vingron, Martin; Roechert, Bernd; Roepstorff, Peter; Valencia, Alfonso; Margalit, Hanah; Armstrong, John; Bairoch, Amos; Cesareni, Gianni; Sherman, David; Apweiler, Rolf

    2004-01-01

    IntAct provides an open source database and toolkit for the storage, presentation and analysis of protein interactions. The web interface provides both textual and graphical representations of protein interactions, and allows exploring interaction networks in the context of the GO annotations of the interacting proteins. A web service allows direct computational access to retrieve interaction networks in XML format. IntAct currently contains approximately 2200 binary and complex interactions imported from the literature and curated in collaboration with the Swiss-Prot team, making intensive use of controlled vocabularies to ensure data consistency. All IntAct software, data and controlled vocabularies are available at http://www.ebi.ac.uk/intact. PMID:14681455

  9. Open-source products for a lighting experiment device.

    PubMed

    Gildea, Kevin M; Milburn, Nelda

    2014-12-01

    The capabilities of open-source software and microcontrollers were used to construct a device for controlled lighting experiments. The device was designed to ascertain whether individuals with certain color vision deficiencies were able to discriminate between the red and white lights in fielded systems on the basis of luminous intensity. The device provided the ability to control the timing and duration of light-emitting diode (LED) and incandescent light stimulus presentations, to present the experimental sequence and verbal instructions automatically, to adjust LED and incandescent luminous intensity, and to display LED and incandescent lights with various spectral emissions. The lighting device could easily be adapted for experiments involving flashing or timed presentations of colored lights, or the components could be expanded to study areas such as threshold light perception and visual alerting systems. PMID:24281687

  10. Open Source GIS Connectors to NASA GES DISC Satellite Data

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Pham, Long; Yang, Wenli

    2014-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) houses a suite of high spatiotemporal resolution GIS data including satellite-derived and modeled precipitation, air quality, and land surface parameter data. The data are valuable to various GIS research and applications at regional, continental, and global scales. On the other hand, many GIS users, especially those from the ArcGIS community, have difficulties in obtaining, importing, and using our data due to factors such as the variety of data products, the complexity of satellite remote sensing data, and the data encoding formats. We introduce a simple open source ArcGIS data connector that significantly simplifies the access and use of GES DISC data in ArcGIS.

  11. An open source mobile platform for psychophysiological self tracking.

    PubMed

    Gaggioli, Andrea; Cipresso, Pietro; Serino, Silvia; Pioggia, Giovanni; Tartarisco, Gennaro; Baldus, Giovanni; Corda, Daniele; Riva, Giuseppe

    2012-01-01

    Self tracking is a recent trend in e-health that refers to the collection, elaboration and visualization of personal health data through ubiquitous computing tools such as mobile devices and wearable sensors. Here we describe a mobile self-tracking platform designed specifically for clinical and research applications in the field of mental health. The smartphone-based application collects: a) self-reported feelings and activities from pre-programmed questionnaires; b) electrocardiographic (ECG) data from a wireless sensor platform worn by the user; c) movement activity information obtained from a tri-axial accelerometer embedded in the wearable platform. Physiological signals are further processed by the application and stored in the smartphone's memory. The mobile data collection platform is free and released under an open source licence to allow wider adoption by the research community (download at: http://sourceforge.net/projects/psychlog/). PMID:22356974
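    A minimal sketch of the kind of on-device processing such a platform might perform (hypothetical; the PsychLog internals are not shown here): mean heart rate derived from ECG R-R intervals, and a simple activity index from tri-axial accelerometer samples.

```python
# Hedged sketch: deriving two summary signals a self-tracking app might
# store -- not the actual PsychLog processing pipeline.
import math

def mean_heart_rate_bpm(rr_intervals_s):
    """Average heart rate in beats per minute from R-R intervals (seconds)."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

def activity_index(samples):
    """Mean acceleration magnitude over (x, y, z) samples in g units;
    roughly 1.0 for a device at rest (gravity only)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return sum(mags) / len(mags)

hr = mean_heart_rate_bpm([0.8, 0.8, 0.8])                  # -> ~75 bpm
act = activity_index([(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)])   # -> ~1.0 at rest
```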

  12. JSim, an open-source modeling system for data analysis

    PubMed Central

    Bassingthwaighte, James B.

    2013-01-01

    JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses the Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high-throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the Systems Biology Markup Language (SBML) and CellML repositories. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking of dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything needed for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source.
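    The dimensional-balance check mentioned above can be sketched in a few lines (a hedged illustration of the general technique, not JSim's implementation): units are maps from base dimensions to exponents, products add exponents, and terms may only be summed when their unit maps agree.

```python
# Minimal sketch of unit bookkeeping for dimensional-balance checking.
# Illustrative only; JSim/MML's actual unit system is richer than this.

def mul_units(a, b):
    """Unit of a product: add exponents dimension by dimension."""
    out = dict(a)
    for dim, exp in b.items():
        out[dim] = out.get(dim, 0) + exp
        if out[dim] == 0:
            del out[dim]          # drop cancelled dimensions
    return out

def balanced(*term_units):
    """All terms in a sum must carry the same unit."""
    first = term_units[0]
    return all(u == first for u in term_units[1:])

second = {"s": 1}
metre = {"m": 1}
velocity = mul_units(metre, {"s": -1})   # m/s
accel = mul_units(velocity, {"s": -1})   # m/s^2

# v = v0 + a*t balances; v = v0 + a does not:
ok = balanced(velocity, velocity, mul_units(accel, second))   # True
bad = balanced(velocity, accel)                               # False
```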

  13. Open Knee: Open Source Modeling and Simulation in Knee Biomechanics.

    PubMed

    Erdemir, Ahmet

    2016-02-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical functions of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor-intensive reproduction of model development steps can be avoided. Interested parties can immediately utilize readily available models for scientific discovery and clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes the detailed anatomical representation of the joint's major tissue structures and their nonlinear mechanical properties and interactions. Three use cases illustrate customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators are also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next-generation knee models is noted. 
These are anticipated to incorporate individualized anatomy and tissue properties, supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from specimens of varying ages.

  14. JSim, an open-source modeling system for data analysis.

    PubMed

    Butterworth, Erik; Jardine, Bartholomew E; Raymond, Gary M; Neal, Maxwell L; Bassingthwaighte, James B

    2013-01-01

    JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses the Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high-throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the Systems Biology Markup Language (SBML) and CellML repositories. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking of dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything needed for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source.

  15. The Future of ECHO: Evaluating Open Source Possibilities

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests by non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO Team have begun investigations into various deployment and Open Sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and Open Source models being investigated by the ECHO team along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code including increased testing demands, bug fixes, security fixes, and new features.

  16. Comparative Analysis Study of Open Source GIS in Malaysia

    NASA Astrophysics Data System (ADS)

    Rasid, Muhammad Zamir Abdul; Kamis, Naddia; Khuizham Abd Halim, Mohd

    2014-06-01

    Open source may represent a major prospective change, capable of delivering value across various industries and offering a competitive option for developing countries. The leading purpose of this research is to discover the degree of adoption of Open Source Software (OSS) connected with Geographic Information System (GIS) applications within Malaysia, where uptake has been limited by inadequate awareness of open-source concepts and by technical deficiencies in open-source tools. This research was carried out in two significant stages. The first stage involved a survey questionnaire to evaluate awareness and acceptance levels based on comparative feedback regarding OSS and commercial GIS; the survey was conducted among three groups of respondents: government servants, university students and lecturers, and individuals. Awareness was measured using a comprehension indicator and a perception indicator for each survey question; these indicators were designed during the analysis to provide a measurable and descriptive basis for the final result. The second stage involved an interview session with a major organization that operates an open-source web GIS, the Federal Department of Town and Country Planning Peninsular Malaysia (JPBD). The outcome of this preliminary study was an understanding of how different groups of people view open source, and of how insufficient awareness of open-source concepts and possibilities may be a significant factor in the level of adoption of open-source solutions.

  17. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges that were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To do this declaratively and efficiently, a C library was implemented that allows the SPICE toolkit to be called from Python for geometrical computations, processing as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrate well into the Python language and ecosystem. Done well, this greatly simplifies the structure of the code and facilitates testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular JavaScript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, making the results easily accessible to interested parties at large. As feedback came in and new ideas were explored, the authors benefited greatly from the design of the Python-to-SPICE library, which allowed algorithms to be expressed concisely and communicated more easily. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.
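    The hand-off to the browser described above can be sketched as a small export step: geometry computed on the Python side is dumped to a JSON file that a three.js page can load. Names, file layout and the reference frame are hypothetical; the mission code itself is not public here.

```python
# Hypothetical sketch of exporting computed spacecraft geometry for a
# three.js viewer. In the real workflow the positions would come from
# SPICE calls batched in C; a dummy two-sample trajectory stands in.
import json

def export_trajectory(times, positions, path):
    """Write time-tagged spacecraft positions (km) as JSON for a Web page."""
    payload = {
        "frame": "J2000",   # assumed reference frame for this sketch
        "units": "km",
        "samples": [
            {"et": t, "xyz": list(p)} for t, p in zip(times, positions)
        ],
    }
    with open(path, "w") as fh:
        json.dump(payload, fh)

export_trajectory([0.0, 60.0],
                  [(1000.0, 0.0, 0.0), (999.0, 50.0, 0.0)],
                  "trajectory.json")
```

    On the browser side, a few lines of JavaScript would fetch this file and feed the `xyz` samples into a three.js line geometry.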

  18. Open Source Software Reuse in the Airborne Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Khudikyan, S. E.; Hart, A. F.; Hardman, S.; Freeborn, D.; Davoodi, F.; Resneck, G.; Mattmann, C. A.; Crichton, D. J.

    2012-12-01

    Earth science airborne missions play an important role in helping humans understand our climate. A challenge for airborne campaigns in contrast to larger NASA missions is that their relatively modest budgets do not permit the ground-up development of data management tools. These smaller missions generally consist of scientists whose primary focus is on the algorithmic and scientific aspects of the mission, which often leaves data management software and systems to be addressed as an afterthought. The Airborne Cloud Computing Environment (ACCE), developed by the Jet Propulsion Laboratory (JPL) to support Earth Science Airborne Program, is a reusable, multi-mission data system environment for NASA airborne missions. ACCE provides missions with a cloud-enabled platform for managing their data. The platform consists of a comprehensive set of robust data management capabilities that cover everything from data ingestion and archiving, to algorithmic processing, and to data delivery. Missions interact with this system programmatically as well as via browser-based user interfaces. The core components of ACCE are largely based on Apache Object Oriented Data Technology (OODT), an open source information integration framework at the Apache Software Foundation (ASF). Apache OODT is designed around a component-based architecture that allows for selective combination of components to create highly configurable data management systems. The diverse and growing community that currently contributes to Apache OODT fosters on-going growth and maturation of the software. ACCE's key objective is to reduce cost and risks associated with developing data management systems for airborne missions. Software reuse plays a prominent role in mitigating these problems. By providing a reusable platform based on open source software, ACCE enables airborne missions to allocate more resources to their scientific goals, thereby opening the doors to increased scientific discovery.

  19. HELIOS: A new open-source radiative transfer code

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin

    2015-12-01

    I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO and CH4 to generate a k-table, which we then employ in HELIOS. I present our results on the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles and spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles and spectra for the exoplanets GJ1214b, HD189733b and HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b and CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonca & Lee 2014, ApJS, 215, 4. [4] exoclime.net
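    The emission spectrum of each atmospheric layer ultimately rests on the Planck function evaluated per wavelength bin. A self-contained sketch of that building block (not HELIOS code, which solves the full two-stream problem on the GPU):

```python
# Planck spectral radiance per wavelength bin -- the elementary source
# function in layer-by-layer radiative transfer. Illustrative only.
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / (math.exp(x) - 1.0)

# Radiance in three infrared bins for a 1500 K layer (hot-Jupiter-like).
# By Wien's law the peak sits near 1.9 micron at this temperature.
bins = [1e-6, 2e-6, 5e-6]   # 1, 2 and 5 micron
spectrum = [planck(lam, 1500.0) for lam in bins]
```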

  20. An Open Source modular platform for hydrological model implementation

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction. Model application: running and evaluating a given model, including regional calibration against arbitrary data using a rich suite of objective functions (likelihood and Bayesian estimation among them) and uncertainty analysis directed towards input or parameter uncertainty; at this level the user need not know the model's composition of subroutines, the internal variables in the model, or how method modules are created. Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task, and investigating the effect of different spatial discretization schemes; at this level the user need not
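    The narrow plug-in interface described above can be sketched as follows (a hedged Python illustration; ENKI itself is C++ with dll modules, and the names here are invented): each routine declares its inputs, states and parameters so the framework can wire time series, initial states and calibrated values to it without knowing its internals.

```python
# Sketch of a declarative plug-in interface for process routines.
# Illustrative only -- not ENKI's actual C++/dll interface.

class Routine:
    """Base class: subclasses declare variables and implement one time step."""
    inputs = ()    # names of forcing time series
    states = ()    # names of state variables
    params = ()    # names of calibratable parameters

    def step(self, data):
        raise NotImplementedError

class DegreeDayMelt(Routine):
    """Toy snowmelt routine: melt = ddf * max(T, 0), bounded by storage."""
    inputs = ("temperature",)
    states = ("snow_storage",)
    params = ("ddf",)

    def step(self, data):
        melt = min(data["ddf"] * max(data["temperature"], 0.0),
                   data["snow_storage"])
        data["snow_storage"] -= melt
        data["melt"] = melt
        return data

# The framework needs only the declarations to build the model context:
module = DegreeDayMelt()
ctx = {"temperature": 2.0, "snow_storage": 5.0, "ddf": 3.0}
ctx = module.step(ctx)   # melt = min(3*2, 5) = 5.0; storage -> 0.0
```

    Because alternative routines can declare the same variables, the framework can swap or run them in parallel for the same task, which is the "model analysis" level of involvement.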

  1. Open source software engineering for geoscientific modeling applications

    NASA Astrophysics Data System (ADS)

    Bilke, L.; Rink, K.; Fischer, T.; Kolditz, O.

    2012-12-01

    OpenGeoSys (OGS) is a scientific open source project for numerical simulation of thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media. The OGS software development community is distributed all over the world and people with different backgrounds are contributing code to a complex software system. The following points have to be addressed for successful software development: - Platform independent code - A unified build system - A version control system - A collaborative project web site - Continuous builds and testing - Providing binaries and documentation for end users OGS should run on a PC as well as on a computing cluster regardless of the operating system. Therefore the code should not include any platform specific feature or library. Instead open source and platform independent libraries like Qt for the graphical user interface or VTK for visualization algorithms are used. A source code management and version control system is a definite requirement for distributed software development. For this purpose Git is used, which enables developers to work on separate versions (branches) of the software and to merge those versions at some point to the official one. The version control system is integrated into an information and collaboration website based on a wiki system. The wiki is used for collecting information such as tutorials, application examples and case studies. Discussions take place in the OGS mailing list. To improve code stability and to verify code correctness a continuous build and testing system, based on the Jenkins Continuous Integration Server, has been established. 
This server is connected to the version control system and does the following on every code change: - Compiles (builds) the code on every supported platform (Linux, Windows, MacOS) - Runs a comprehensive test suite of over 120 benchmarks and verifies the results - Runs software-development metrics on the code (such as compiler warnings and code complexity).
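    The benchmark-verification step boils down to comparing each simulated result against a stored reference within a tolerance. A minimal sketch of that idea (benchmark names and tolerances here are hypothetical; OGS's actual harness differs):

```python
# Sketch of result verification for a benchmark suite: flag any benchmark
# whose computed value drifts from its reference beyond a tolerance.

def verify(results, references, rel_tol=1e-6, abs_tol=1e-12):
    """Return the names of benchmarks whose results deviate from reference."""
    failed = []
    for name, value in results.items():
        ref = references[name]
        if abs(value - ref) > max(abs_tol, rel_tol * abs(ref)):
            failed.append(name)
    return failed

refs = {"darcy_flow": 1.25, "heat_transport": 310.15}
run = {"darcy_flow": 1.25000001, "heat_transport": 311.0}
bad = verify(run, refs)   # -> ["heat_transport"]
```

    A continuous integration server would run such a check after every build on every platform and fail the build when the list is non-empty.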

  2. Common characteristics of open source software development and applicability for drug discovery: a systematic review

    PubMed Central

    2011-01-01

    Background Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. Methods A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Results Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. Conclusions We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents. PMID:21955914

  3. Assessing Ecohydrological Impacts of Forest Disturbance using Open Source Software

    NASA Astrophysics Data System (ADS)

    Lovette, J. P.; Chang, T.; Treglia, M.; Gan, T.; Duncan, J.

    2014-12-01

    In the past 30 years, land management protocols, climate change, and land use have radically changed the frequency and magnitude of disturbance regimes. Landscape-scale disturbances can change forest structure, with impacts on adjacent watersheds that may affect water quantity and quality for human and natural resource use. Our project quantifies hydrologic changes resulting from a suite of disturbance events that shift vegetation cover at watersheds across the continental United States. These disturbance events include wildfire, insect/disease outbreaks, deforestation (logging), hurricanes, ice storms, and human land use. Our major question is: can the effects of disturbance on ecohydrology be generalized across regions, time scales, and spatial scales? Because the analysis uses a workflow of open source tools and publicly available data, this work can be extended and leveraged by other researchers. Spatial data on disturbance include the MODIS Global Disturbance Index (NTSG), Landsat 7 Global Forest Change (Hansen dataset), and the Degree of Human Modification (Theobald dataset). Ecohydrologic response data include USGS NWIS, USFS-LTER climDB/hydroDB, and the CUAHSI HIS.

  4. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    PubMed Central

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform allowing for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, as well as at lower QCIF and CIF resolutions, at different bitrates, and with four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on an atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that wireless medical video communications of adequate diagnostic quality are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082
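    PSNR, the objective metric used above, is simple enough to sketch directly. A self-contained example for 8-bit frames stored as flat lists of pixel values:

```python
# Peak signal-to-noise ratio between an original and a received frame.
# Standard definition: PSNR = 10 * log10(peak^2 / MSE), infinite when
# the frames are identical.
import math

def psnr(frame_a, frame_b, peak=255.0):
    """PSNR in dB between two equal-length sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

original = [100, 120, 140, 160]
received = [101, 119, 141, 159]      # light noise, MSE = 1
quality = psnr(original, received)   # 10*log10(255^2 / 1) ~ 48.13 dB
```

    In the study's pipeline, the VFD calibration step aligns the frames temporally before this per-frame comparison, so PSNR is not depressed by frame delay alone.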

  5. Open-Source Software for Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Oyafuso, Fabiano; Hua, Hook; Tisdale, Edwin; Hart, Don

    2004-01-01

    The Nanoelectronic Modeling 3-D (NEMO 3-D) computer program has been upgraded to open-source status through elimination of license-restricted components. The present version functions equivalently to the version reported in "Software for Numerical Modeling of Nanoelectronic Devices" (NPO-30520), NASA Tech Briefs, Vol. 27, No. 11 (November 2003), page 37. To recapitulate: NEMO 3-D performs numerical modeling of the electronic transport and structural properties of a semiconductor device that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. NEMO 3-D solves the applicable quantum matrix equation on a Beowulf-class cluster computer by use of a parallel-processing matrix vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. A prior upgrade of NEMO 3-D incorporated a capability for a strain treatment, parameterized for bulk material properties of GaAs and InAs, for two tight-binding submodels. NEMO 3-D has been demonstrated in atomistic analyses of effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In(0.6)Ga(0.4)As quantum dots.
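    The Lanczos step NEMO 3-D relies on can be sketched generically. The following is a hand-rolled Lanczos iteration for the extreme eigenvalues of a small dense symmetric matrix, not NEMO 3-D's parallel sparse implementation; the matrix size, iteration count, and random test matrix are all illustrative choices:

```python
import numpy as np

def lanczos_extreme(A, k=60, seed=0):
    """Approximate the extreme eigenvalues of a symmetric matrix A with
    k Lanczos iterations (no reorthogonalization, for brevity)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev          # one matrix-vector multiplication per step
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:
            break                           # invariant subspace found early
        betas.append(beta)
        q_prev, q = q, w / beta
    # Eigenvalues of the small tridiagonal matrix approximate A's extremes.
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    ritz = np.linalg.eigvalsh(T)
    return ritz[0], ritz[-1]

# Demo on a random symmetric matrix (a stand-in for a tight-binding Hamiltonian).
n = 100
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
lo, hi = lanczos_extreme(A, k=60)
```

The appeal for device modeling is that only matrix-vector products with the (sparse) Hamiltonian are needed, which is exactly what NEMO 3-D parallelizes.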

  6. Open Source Cloud Computing for Transiting Planet Discovery

    NASA Astrophysics Data System (ADS)

    McCullough, Peter R.; Fleming, Scott W.; Zonca, Andrea; Flowers, Jack; Nguyen, Duy Cuong; Sinkovits, Robert; Machalek, Pavel

    2014-06-01

    We provide an update on the development of the open-source software suite designed to detect exoplanet transits using high-performance and cloud computing resources (https://github.com/openEXO). Our collaboration continues to grow as we are developing algorithms and codes related to the detection of transit-like events, especially in Kepler data, Kepler 2.0 and TESS data when available. Extending the work of Berriman et al. (2010, 2012), we describe our use of the XSEDE-Gordon supercomputer and Amazon EMR cloud to search for aperiodic transit-like events in Kepler light curves. Such events may be caused by circumbinary planets or transiting bodies, either planets or stars, with orbital periods comparable to or longer than the observing baseline such that only one transit is observed. As a bonus, we use the same code to find stellar flares too; whereas transits reduce the flux in a box-shaped profile, flares increase the flux in a fast-rise, exponential-decay (FRED) profile that nevertheless can be detected reliably with a square-wave finder.
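    A single-event box search of the kind described can be sketched as a sliding matched filter: integrate the flux deficit inside a box of fixed width and rank candidate positions by signal-to-noise. The code below is an illustration on synthetic data, not the collaboration's production code; the light curve, transit depth, and MAD-based noise estimate are assumptions for the example:

```python
import numpy as np

def single_box_search(flux, width):
    """Slide a box of given width (in samples) along a flux series and
    return the start index and signal-to-noise of the deepest box-shaped dip."""
    flux = np.asarray(flux, dtype=float)
    n = len(flux)
    baseline = np.median(flux)
    sigma = 1.4826 * np.median(np.abs(flux - baseline))   # robust scatter (MAD)
    csum = np.concatenate([[0.0], np.cumsum(baseline - flux)])
    best_i, best_snr = 0, -np.inf
    for i in range(n - width + 1):
        depth_sum = csum[i + width] - csum[i]             # total deficit inside the box
        snr = depth_sum / (sigma * np.sqrt(width))
        if snr > best_snr:
            best_i, best_snr = i, snr
    return best_i, best_snr

# Toy light curve: flat with noise plus one box-shaped transit at sample 800.
rng = np.random.default_rng(2)
flux = 1.0 + 1e-4 * rng.standard_normal(2000)
flux[800:830] -= 0.001                                    # single 30-sample transit
i, snr = single_box_search(flux, width=30)
```

A flare search would flip the sign and swap the box for a fast-rise, exponential-decay (FRED) template, as the abstract notes.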

  7. Open source simulation tool for electrophoretic stacking, focusing, and separation.

    PubMed

    Bercovici, Moran; Lele, Sanjiva K; Santiago, Juan G

    2009-02-01

    We present the development, formulation, and performance of a new simulation tool for electrophoretic preconcentration and separation processes such as capillary electrophoresis, isotachophoresis, and field amplified sample stacking. The code solves the one-dimensional transient advection-diffusion equations for multiple multivalent weak electrolytes (including ampholytes) and includes a model for pressure-driven flow and Taylor-Aris dispersion. The code uses a new approach for the discretization of the equations, consisting of a high resolution compact scheme which is combined with an adaptive grid algorithm. We show that this combination allows for accurate resolution of sharp concentration gradients at high electric fields, while at the same time significantly reducing the computational time. We demonstrate smooth, stable, and accurate solutions at current densities as high as 5000 A/m², using only 300 grid points, and a 75-fold reduction in computational time compared with equivalent uniform grid techniques. The code is freely available as open source at http://microfluidics.stanford.edu. PMID:19124132
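    The governing one-dimensional transport equation can be illustrated with a far simpler scheme than the paper's adaptive-grid compact method: first-order upwind advection plus explicit central diffusion on a uniform periodic grid, for a single species. All parameter values below are illustrative:

```python
import numpy as np

def advect_diffuse(c0, u, D, dx, dt, steps):
    """Evolve dc/dt + u dc/dx = D d2c/dx2 with first-order upwind advection
    (u > 0) and explicit central diffusion on a periodic uniform grid."""
    c = np.asarray(c0, dtype=float).copy()
    assert u * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5, "explicit stability limits"
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                       # upwind difference
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + dif)
    return c

# A Gaussian sample peak drifting under an effective velocity with diffusion.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
c0 = np.exp(-((x - 0.3) / 0.02) ** 2)
c = advect_diffuse(c0, u=0.5, D=1e-4, dx=x[1] - x[0], dt=5e-3, steps=100)
```

The reason the paper needs a compact scheme and adaptive grid is visible even here: a first-order scheme like this one smears exactly the sharp gradients that isotachophoresis produces.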

  8. Digital time stamping system based on open source technologies.

    PubMed

    Miskinis, Rimantas; Smirnov, Dmitrij; Urba, Emilis; Burokas, Andrius; Malysko, Bogdan; Laud, Peeter; Zuliani, Francesco

    2010-03-01

    A digital time stamping system based on open source technologies (LINUX-UBUNTU, OpenTSA, OpenSSL, MySQL) is described in detail, including all important testing results. The system, called BALTICTIME, was developed under a project sponsored by the European Commission under the Program FP 6. It was designed to meet the requirements posed to the systems of legal and accountable time stamping and to be applicable to the hardware commonly used by the national time metrology laboratories. The BALTICTIME system is intended for the use of governmental and other institutions as well as personal bodies. Testing results demonstrate that the time stamps issued to the user by BALTICTIME and saved in BALTICTIME's archives (which implies that the time stamps are accountable) meet all the regulatory requirements. Moreover, the BALTICTIME in its present implementation is able to issue more than 10 digital time stamps per second. The system can be enhanced if needed. The test version of the BALTICTIME service is free and available at http://baltictime.pfi.lt:8080/btws/ and http://baltictime.lnmc.lv:8080/btws/. PMID:20211793
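    The core idea of a time stamp token, binding a document digest to a time value under the authority's key, can be sketched with the Python standard library. BALTICTIME itself implements the RFC 3161 protocol with OpenTSA/OpenSSL and X.509 signatures; the HMAC below is only a stand-in for that signature, and all values are invented:

```python
import hashlib
import hmac
import json

TSA_KEY = b"demo-secret-key"   # stand-in: a real TSA signs with an X.509 key

def issue_timestamp(document: bytes, now_epoch: int) -> dict:
    """Bind a document digest to a time value under the authority's key."""
    payload = {"digest": hashlib.sha256(document).hexdigest(), "time": int(now_epoch)}
    mac = hmac.new(TSA_KEY, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_timestamp(document: bytes, token: dict) -> bool:
    if hashlib.sha256(document).hexdigest() != token["payload"]["digest"]:
        return False   # document was altered after stamping
    expected = hmac.new(TSA_KEY, json.dumps(token["payload"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"])

token = issue_timestamp(b"signed-contract-bytes", 1700000000)
```

Because only the digest is stamped, the document itself never leaves the user, which is also how RFC 3161 services keep stamped content private.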

  9. Agile Methods for Open Source Safety-Critical Software

    PubMed Central

    Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-01-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion. PMID:21799545

  10. Dinosaur: A Refined Open-Source Peptide MS Feature Detector

    PubMed Central

    2016-01-01

    In bottom-up mass spectrometry (MS)-based proteomics, peptide isotopic and chromatographic traces (features) are frequently used for label-free quantification in data-dependent acquisition MS but can also be used for the improved identification of chimeric spectra or sample complexity characterization. Feature detection is difficult because of the high complexity of MS proteomics data from biological samples, which frequently causes features to intermingle. In addition, existing feature detection algorithms commonly suffer from compatibility issues, long computation times, or poor performance on high-resolution data. Because of these limitations, we developed a new tool, Dinosaur, with increased speed and versatility. Dinosaur has the functionality to sample algorithm computations through quality-control plots, which we call a plot trail. From the evaluation of this plot trail, we introduce several algorithmic improvements to further improve the robustness and performance of Dinosaur, with the detection of features for 98% of MS/MS identifications in a benchmark data set, and no other algorithm tested in this study passed 96% feature detection. We finally used Dinosaur to reimplement a published workflow for peptide identification in chimeric spectra, increasing chimeric identification from 26% to 32% over the standard workflow. Dinosaur is operating-system-independent and is freely available as open source on https://github.com/fickludd/dinosaur. PMID:27224449

  11. Dinosaur: A Refined Open-Source Peptide MS Feature Detector.

    PubMed

    Teleman, Johan; Chawade, Aakash; Sandin, Marianne; Levander, Fredrik; Malmström, Johan

    2016-07-01

    In bottom-up mass spectrometry (MS)-based proteomics, peptide isotopic and chromatographic traces (features) are frequently used for label-free quantification in data-dependent acquisition MS but can also be used for the improved identification of chimeric spectra or sample complexity characterization. Feature detection is difficult because of the high complexity of MS proteomics data from biological samples, which frequently causes features to intermingle. In addition, existing feature detection algorithms commonly suffer from compatibility issues, long computation times, or poor performance on high-resolution data. Because of these limitations, we developed a new tool, Dinosaur, with increased speed and versatility. Dinosaur has the functionality to sample algorithm computations through quality-control plots, which we call a plot trail. From the evaluation of this plot trail, we introduce several algorithmic improvements to further improve the robustness and performance of Dinosaur, with the detection of features for 98% of MS/MS identifications in a benchmark data set, and no other algorithm tested in this study passed 96% feature detection. We finally used Dinosaur to reimplement a published workflow for peptide identification in chimeric spectra, increasing chimeric identification from 26% to 32% over the standard workflow. Dinosaur is operating-system-independent and is freely available as open source on https://github.com/fickludd/dinosaur. PMID:27224449

  12. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts. PMID:24934301
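    The full-ellipse fitting that ExpertEyes performs can be illustrated, in much simplified form, by an algebraic least-squares conic fit that recovers an ellipse center even when only an arc of the contour is visible (as when the corneal reflection breaks the pupil edge). This is a generic sketch, not the package's forward eye model; the ellipse parameters are invented:

```python
import numpy as np

def fit_ellipse_center(x, y):
    """Least-squares fit of the conic a x^2 + b xy + c y^2 + d x + e y = 1
    to points, then solve for the center where the conic's gradient vanishes."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    a, b, c, d, e = coef
    # Center: [2a b; b 2c] [cx, cy]^T = [-d, -e]^T
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy

# Points on a partial ellipse arc only -- the contour is deliberately broken.
t = np.linspace(0.3, 5.0, 60)
x = 3.0 + 2.0 * np.cos(t)
y = 1.0 + 1.2 * np.sin(t)
cx, cy = fit_ellipse_center(x, y)
```

Fitting the full conic rather than a contour fragment is what makes the center estimate robust to a broken edge, which is the situation the abstract highlights.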

  13. Cloud based, Open Source Software Application for Mitigating Herbicide Drift

    NASA Astrophysics Data System (ADS)

    Saraswat, D.; Scott, B.

    2014-12-01

    The spread of herbicide resistant weeds has resulted in the need for clearly marked fields. In response to this need, the University of Arkansas Cooperative Extension Service launched a program named Flag the Technology in 2011. This program uses color-coded flags as a visual alert of the herbicide trait technology within a farm field. The flag based program also serves to help avoid herbicide misapplication and prevent herbicide drift damage between fields with differing crop technologies. This program has been endorsed by Southern Weed Science Society of America and is attracting interest from across the USA, Canada, and Australia. However, flags risk being misplaced through mischief or lost to severe windstorms and thunderstorms. This presentation will discuss the design and development of a cloud-based, free application utilizing open-source technologies, called Flag the Technology Cloud (FTTCloud), for allowing agricultural stakeholders to color code their farm fields for indicating herbicide resistant technologies. The developed software utilizes modern web development practices, widely used design technologies, and basic geographic information system (GIS) based interactive interfaces for representing, color-coding, searching, and visualizing fields. The program has also been made compatible with devices of different sizes: smartphones, tablets, desktops, and laptops.

  14. Leveraging open-source software in large simulations at LLNL

    NASA Astrophysics Data System (ADS)

    Dubois, Paul F.

    2004-03-01

    Three intersecting forces are making possible a revolution in the construction of scientific programs. Object-oriented technology has made possible the creation of truly reusable components. The Internet and its search engines have made it possible to find and obtain appropriate components and obtain help in learning to use them. The open source movement has made the components much more reliable, removed economic barriers to reuse, and allowed users to contribute to their evolution and upkeep. Staff members at Lawrence Livermore National Laboratory are full participants in this movement, both contributing and using reusable components in key areas of science, mathematics, and computer science. We will discuss the use of such components in two efforts in particular: Kull, an ASCI code for modeling laser fusion targets, and CDAT, a tool used world-wide for climate data analysis. We will also briefly discuss the problem of building such a wide variety of software on LLNL's wide variety of exotic hardware, and what factors make this problem more difficult than it need be.

  15. A low-cost, open-source, wireless electrophysiology system.

    PubMed

    Ghomashchi, A; Zheng, Z; Majaj, N; Trumpis, M; Kiorpes, L; Viventi, J

    2014-01-01

    Many experiments in neuroscience require or would benefit tremendously from a wireless neural recording system. However, commercially available wireless systems are expensive, have moderate to high noise and are often not customizable. Academic wireless systems present impressive capabilities, but are not available for other labs to use. To overcome these limitations, we have developed an ultra-low noise 8 channel wireless electrophysiological data acquisition system using standard, commercially available components. The system is capable of recording many types of neurological signals, including EEG, ECoG, LFP and unit activity. With a diameter of just 25 mm and height of 9 mm, including a CR2032 Lithium coin cell battery, it is designed to fit into a small recording chamber while minimizing the overall implant height (Fig. 1 and 3). Using widely available parts we were able to keep the material cost of our system under $100. The complete design, including schematic, PCB layout, bill of materials and source code, will be released through an open source license, allowing other labs to modify the design to fit their needs. We have also developed a driver to acquire data using the BCI2000 software system. Feedback from the community will allow us to improve the design and create a more useful neuroscience research tool. PMID:25570656

  16. An Extensible Open-Source Compiler Infrastructure for Testing

    SciTech Connect

    Quinlan, D; Ur, S; Vuduc, R

    2005-12-09

    Testing forms a critical part of the development process for large-scale software, and there is growing need for automated tools that can read, represent, analyze, and transform the application's source code to help carry out testing tasks. However, the support required to compile applications written in common general purpose languages is generally inaccessible to the testing research community. In this paper, we report on an extensible, open-source compiler infrastructure called ROSE, which is currently in development at Lawrence Livermore National Laboratory. ROSE specifically targets developers who wish to build source-based tools that implement customized analyses and optimizations for large-scale C, C++, and Fortran90 scientific computing applications (on the order of a million lines of code or more). However, much of this infrastructure can also be used to address problems in testing, and ROSE is by design broadly accessible to those without a formal compiler background. This paper details the interactions between testing of applications and the ways in which compiler technology can aid in the understanding of those applications. We emphasize the particular aspects of ROSE, such as support for the general analysis of whole programs, that are particularly well-suited to the testing research community and the scale of the problems that community solves.

  17. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field, covering a wide range of applications in biological and clinical research. However, while machinery for automated experimenting and data acquisition has been developing rapidly in the past years, automated image analysis often introduces a bottleneck in high content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested using several publicly available biological datasets, and provided results which are favorably comparable to the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not knowledgeable in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be effectively used for a wide range of biological image analysis tasks. Using wndchrm can allow scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266
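    The extract-features-then-classify pattern wndchrm follows can be miniaturized as below. Wndchrm itself computes thousands of descriptors across image transforms and selects the most informative; this sketch uses four toy statistics and a nearest-neighbour rule purely to illustrate the workflow, with invented "texture" classes:

```python
import numpy as np

def features(img):
    """A tiny content descriptor: intensity statistics plus edge-energy
    statistics computed from image gradients."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), edges.mean(), edges.std()])

def classify(test_img, train_imgs, train_labels):
    """Nearest neighbour in feature space -- the same broad idea wndchrm
    uses for classification and similarity measurement."""
    f = features(test_img)
    dists = [np.linalg.norm(f - features(t)) for t in train_imgs]
    return train_labels[int(np.argmin(dists))]

# Toy data: smooth vs noisy "textures" as two classes.
rng = np.random.default_rng(3)
smooth = [np.ones((32, 32)) * v for v in (90.0, 100.0, 110.0)]
noisy = [100.0 + 40.0 * rng.standard_normal((32, 32)) for _ in range(3)]
train = smooth[:2] + noisy[:2]
labels = ["smooth", "smooth", "noisy", "noisy"]
```

The point of the design, as the abstract argues, is that the user supplies only labeled images; the descriptor computation and selection are generic.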

  18. Open source software and web services for designing therapeutic molecules.

    PubMed

    Singla, Deepak; Dhanda, Sandeep Kumar; Chauhan, Jagat Singh; Bhardwaj, Anshu; Brahmachari, Samir K; Raghava, Gajendra P S

    2013-01-01

    Despite the tremendous progress in the field of drug designing, discovering a new drug molecule is still a challenging task. Drug discovery and development is a costly, time consuming and complex process that requires millions of dollars and 10-15 years to bring a new drug molecule to market. This huge investment and long-term process are attributed to high failure rate, complexity of the problem and strict regulatory rules, in addition to other factors. Given the availability of 'big' data with ever improving computing power, it is now possible to model biological systems, which is expected to make the drug discovery process faster and more cost effective. Computer Aided Drug Designing (CADD) has emerged as a fast alternative method to bring down the cost involved in discovering a new drug. In the past, numerous computer programs have been developed across the globe to assist researchers working in the field of drug discovery. Broadly, these programs can be classified into three categories: freeware, shareware, and commercial software. In this review, we describe freeware or open-source software that is commonly used for designing therapeutic molecules. Major emphasis is on software and web services in the field of chemo- or pharmaco-informatics, including in silico tools used for computing molecular descriptors, designing inhibitors against drug targets, building QSAR models, and predicting ADMET properties. PMID:23647540

  19. An Open Source Software Tool for Hydrologic Climate Change Assessment

    NASA Astrophysics Data System (ADS)

    Park, Dong Kwan; Shin, Mun-Ju; Kim, Young-Oh

    2015-04-01

    With the Intergovernmental Panel on Climate Change (IPCC) regularly publishing Climate Change Assessment Reports containing updated forecasts and scenarios, it is necessary to periodically perform hydrologic assessment studies on these scenarios. Practical users, including scientists and government staff, need convenient tools that run the full workflow, from climate input data (historical observations and climate change scenarios) through rainfall-runoff simulation to assessment. We propose HydroCAT (Hydrologic Climate change Assessment Tool), a flexible software tool designed to simplify and streamline hydrologic climate change assessment studies by incorporating: climate inputs from general circulation models using the latest climate change scenarios; downscaling of these values using statistical methods; calibration and simulation of multiple well-known lumped conceptual hydrologic models; and statistical assessment of the results. The package is implemented as an open-source, R-based framework that supports a wide range of data formats, hydrologic models, and climate change scenarios. Its use is demonstrated in a case study of the Geum River basin in the Republic of Korea.
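    The rainfall-runoff stage of such a workflow can be illustrated with the simplest possible lumped conceptual model, a single linear reservoir. HydroCAT's actual models are more elaborate; the storage coefficient and rainfall series here are invented for the example:

```python
def linear_reservoir(precip, k=0.25, s0=0.0):
    """One-bucket lumped rainfall-runoff model: storage S gains each step's
    rainfall P and drains as streamflow Q = k * S."""
    s, flows = s0, []
    for p in precip:
        s += p          # rainfall enters storage
        q = k * s       # outflow proportional to storage
        s -= q
        flows.append(q)
    return flows

# A 10 mm rain pulse, a dry spell, then a 5 mm pulse.
hydrograph = linear_reservoir([10.0, 0.0, 0.0, 5.0, 0.0], k=0.25)
```

Climate-change assessment then amounts to driving the same calibrated model with downscaled scenario rainfall instead of observations and comparing the simulated flows statistically.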

  20. Agile Methods for Open Source Safety-Critical Software.

    PubMed

    Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-08-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion. PMID:21799545

  1. An Open Source Platform for Earth Science Research and Applications

    NASA Astrophysics Data System (ADS)

    Hiatt, S. H.; Ganguly, S.; Melton, F. S.; Michaelis, A.; Milesi, C.; Nemani, R. R.; Votava, P.; Wang, W.; Zhang, G.; Nasa Ecological Forecasting Lab

    2010-12-01

    The Terrestrial Observation and Prediction System (TOPS) at NASA-ARC's Ecological Forecasting Lab produces a suite of gridded data products in near real-time that are designed to enhance management decisions related to various environmental phenomena, as well as to advance scientific understanding of these ecosystem processes. While these data hold tremendous potential value for a wide range of disciplines, the sheer size of the datasets presents challenges for their analysis and distribution. Additionally, remote sensing data and their derivative ecological models rely on quality ground-based observations for evaluating and validating model outputs. The Ecological Forecasting Lab addresses these challenges by developing a web-based data gateway, leveraging a completely open source software stack. TOPS data is organized and made accessible via an OPeNDAP server. Toolkits such as GDAL and Matplotlib are used within a Python web server to generate dynamic views of TOPS data that can be incorporated into web applications, providing a simple interface for visualizing spatial and/or temporal trends. In order to facilitate collection of ground observations for validating and enhancing ecological models, we have implemented a web portal that allows volunteers to visualize current ecological conditions and to submit their observations. Initially we use this system to assist research related to plant phenology, but we plan to extend the system to support other areas of research as well.

  2. A global, open-source database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen; Jongman, Brenden; Bouwer, Laurens; Winsemius, Hessel; de Moel, Hans; Ward, Philip

    2016-04-01

    Accurate flood risk estimation is pivotal in that it enables risk-informed policies in disaster risk reduction, as emphasized in the recent Sendai Framework for Disaster Risk Reduction. To improve our understanding of flood risk, models are now capable of providing actionable risk information on the (sub)global scale. Still, the accuracy of their results is greatly limited by the lack of information on the flood protection standards actually in place, forcing researchers to make broad assumptions about the extent of protection. With our work we propose a first global, open-source database of FLOod PROtection Standards, FLOPROS, covering a range of spatial scales. FLOPROS is structured in three layers of information, merged into one consistent database: 1) the Design layer contains empirical information about the standard of protection presently in place; 2) the Policy layer contains intended protection standards from normative documents; 3) the Model layer uses a validated numerical approach to calculate protection standards for areas not covered in the other layers. The FLOPROS database can be used for more accurate risk assessment exercises across scales. As the database should be continually updated to reflect new interventions, we invite researchers and practitioners to contribute information. Further, we look for partners within the risk community to participate in additional strategies to improve the amount and accuracy of information contained in this first version of FLOPROS.
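    The layered structure can be sketched as a simple precedence lookup: take the empirical Design value where available, fall back to Policy, then to the modelled layer. The regions and return-period values below are hypothetical, and the real merging procedure involves more validation than this:

```python
def merge_flopros(design, policy, model):
    """Combine three protection-standard layers into one map, preferring
    Design over Policy over Model for each region (a simplification of
    the database's actual merging rules)."""
    merged = {}
    for region in set(design) | set(policy) | set(model):
        for layer in (design, policy, model):
            if layer.get(region) is not None:
                merged[region] = layer[region]
                break
    return merged

# Protection standards expressed as flood return periods in years (hypothetical).
design = {"cityA": 1000}
policy = {"cityA": 500, "cityB": 100}
model = {"cityA": 200, "cityB": 80, "ruralC": 10}
standards = merge_flopros(design, policy, model)
```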

  3. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
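    The de-identification step can be sketched with a plain dictionary standing in for a DICOM dataset. The actual tools operate on DICOM attributes through DCMTK and follow the standard's de-identification mechanisms; the tag names below are real DICOM attribute keywords, but the replacement policy and pseudonym scheme are simplifications invented for this example:

```python
def deidentify(dataset, reid_index):
    """Replace PHI attributes with neutral values while keeping the data
    usable for diagnosis and indexing; originals go into a separately
    protected re-identification index."""
    phi_tags = ("PatientName", "PatientID", "PatientBirthDate", "InstitutionName")
    pseudonym = f"ANON{len(reid_index) + 1:04d}"
    removed, out = {}, dict(dataset)
    for tag in phi_tags:
        if tag in out:
            removed[tag] = out[tag]
            # Keep PatientID usable for indexing by mapping it to a pseudonym.
            out[tag] = pseudonym if tag == "PatientID" else "ANONYMIZED"
    reid_index[pseudonym] = removed   # stored under access control for re-identification
    return out

index = {}
ct = {"PatientName": "DOE^JOHN", "PatientID": "12345",
      "Modality": "CT", "StudyDate": "20110301"}
anon = deidentify(ct, index)
```

Keeping a keyed re-identification index, rather than destroying the PHI, is what allows the standardized re-identification path the abstract mentions.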

  4. Open-Source Photometric System for Enzymatic Nitrate Quantification.

    PubMed

    Wittbrodt, B T; Squires, D A; Walbeck, J; Campbell, E; Campbell, W H; Pearce, J M

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive, portable, handheld photometer would greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment needed to perform this method of measuring water quality, and iii) make the method easier to carry out in the field. The device performs as well as commercial proprietary systems for less than 15% of the cost for materials. This allows for greater access to the technology and the new, safer nitrate testing technique. PMID:26244342
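    A photometer of this kind typically converts measured absorbance to concentration through a linear calibration curve (Beer-Lambert behaviour at low absorbance). A minimal sketch, using hypothetical calibration standards rather than values from the paper:

```python
def fit_calibration(concentrations, absorbances):
    """Ordinary least-squares line A = m*c + b through calibration standards."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_a = sum(absorbances) / n
    m = sum((c - mean_c) * (a - mean_a)
            for c, a in zip(concentrations, absorbances)) \
        / sum((c - mean_c) ** 2 for c in concentrations)
    b = mean_a - m * mean_c
    return m, b

def nitrate_from_absorbance(a, m, b):
    """Invert the calibration line to estimate concentration."""
    return (a - b) / m

# Hypothetical standards: nitrate-N in mg/L versus measured absorbance.
m, b = fit_calibration([0.0, 2.0, 5.0, 10.0], [0.01, 0.21, 0.51, 1.01])
estimate = nitrate_from_absorbance(0.41, m, b)
```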

  5. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project that uses NEXUS as the core of an oceanographic anomaly detection service and web portal. We call it OceanXtremes.
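    The chunked-array idea behind NEXUS can be miniaturized as below: statistics are computed per tile and then combined, so no step ever needs the whole grid at once. This serial sketch only illustrates the decomposition; NEXUS distributes such work with Spark over tiles stored in Cassandra, and the toy grid here is invented:

```python
import numpy as np

def tile_means(grid, tile):
    """Split a geo-grid into fixed-size tiles and compute one summary per
    tile, keyed by the tile's upper-left index."""
    ny, nx = grid.shape
    out = {}
    for i in range(0, ny, tile):
        for j in range(0, nx, tile):
            out[(i, j)] = float(np.nanmean(grid[i:i + tile, j:j + tile]))
    return out

def global_mean_from_tiles(grid, tile):
    """Combine per-tile partial sums into a global mean -- the same
    map-then-reduce shape a cluster framework would parallelize."""
    total, count = 0.0, 0
    ny, nx = grid.shape
    for i in range(0, ny, tile):
        for j in range(0, nx, tile):
            chunk = grid[i:i + tile, j:j + tile]
            total += float(np.nansum(chunk))
            count += int(np.count_nonzero(~np.isnan(chunk)))
    return total / count

sst = np.arange(64.0).reshape(8, 8)   # toy "sea-surface temperature" grid
```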

  6. Open-Source Photometric System for Enzymatic Nitrate Quantification

    PubMed Central

    Wittbrodt, B. T.; Squires, D. A.; Walbeck, J.; Campbell, E.; Campbell, W. H.; Pearce, J. M.

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as anthropogenic factors have created a large overabundance. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive, portable, handheld photometer would greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of the equipment needed to perform this method of water quality measurement, and iii) make the method easier to carry out in the field. The device performs as well as commercial proprietary systems for less than 15% of the materials cost. This allows greater access to the technology and to the new, safer nitrate testing technique. PMID:26244342
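
    The photometric measurement such a device performs can be sketched with the Beer-Lambert relation: absorbance computed from raw intensities is mapped to concentration through a linear calibration against standards. A minimal sketch, with invented calibration values and detector counts (none of these numbers are from the paper):

```python
import numpy as np

def absorbance(i_sample, i_blank):
    """A = -log10(I_sample / I_blank) from raw detector intensities."""
    return -np.log10(i_sample / i_blank)

# Hypothetical calibration standards: NO3-N concentration (mg/L) vs. absorbance
conc_std = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
abs_std = np.array([0.002, 0.051, 0.100, 0.198, 0.495])
slope, intercept = np.polyfit(abs_std, conc_std, 1)  # linear calibration

a = absorbance(i_sample=620.0, i_blank=980.0)        # hypothetical ADC counts
conc = slope * a + intercept
print(round(conc, 2))                                # estimated mg/L NO3-N
```

    The same arithmetic runs comfortably on a microcontroller, which is what makes a low-cost handheld photometer feasible.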

  7. MetaTrans: an open-source pipeline for metatranscriptomics.

    PubMed

    Martinez, Xavier; Pozuelo, Marta; Pascal, Victoria; Campos, David; Gut, Ivo; Gut, Marta; Azpiroz, Fernando; Guarner, Francisco; Manichanh, Chaysavanh

    2016-01-01

    To date, meta-omic approaches use high-throughput sequencing technologies, which produce a huge amount of data, thus challenging modern computers. Here we present MetaTrans, an efficient open-source pipeline to analyze the structure and functions of active microbial communities using the power of multi-threading computers. The pipeline is designed to perform two types of RNA-Seq analyses: taxonomic and gene expression. It performs quality-control assessment and rRNA removal, maps reads against functional databases, and also handles differential gene expression analysis. Its efficacy was validated by analyzing data from synthetic mock communities, data from a previous study, and data generated from twelve human fecal samples. Compared to an existing web application server, MetaTrans is more efficient in terms of runtime (around 2 hours per million transcripts) and provides adapted tools to compare gene expression levels. It has been tested with a human gut microbiome database but also offers the option to use a general database in order to analyze other ecosystems. For the installation and use of the pipeline, we provide a detailed guide at the following website (www.metatrans.org). PMID:27211518

  8. Nektar++: An open-source spectral/hp element framework

    NASA Astrophysics Data System (ADS)

    Cantwell, C. D.; Moxey, D.; Comerford, A.; Bolis, A.; Rocco, G.; Mengaldo, G.; De Grazia, D.; Yakovlev, S.; Lombard, J.-E.; Ekelschot, D.; Jordi, B.; Xu, H.; Mohamied, Y.; Eskilsson, C.; Nelson, B.; Vos, P.; Biotto, C.; Kirby, R. M.; Sherwin, S. J.

    2015-07-01

    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy over low-order techniques at reduced computational cost for a given number of degrees of freedom. However, their proliferation is often limited by their complexity, which makes these methods challenging to implement and use. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.
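
    The accuracy argument for high-order methods can be illustrated numerically. The following is not Nektar++ code, just a NumPy sketch: for a smooth function, raising the polynomial degree of the approximation (here a Chebyshev fit standing in for a modal expansion) drives the maximum error down far faster than linearly.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
f = np.sin(np.pi * x)                      # a smooth target solution

errors = {}
for p in (2, 4, 8, 16):
    # least-squares Chebyshev fit of degree p, a stand-in for a modal expansion
    coef = np.polynomial.chebyshev.chebfit(x, f, p)
    errors[p] = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coef) - f))

for p in (2, 4, 8, 16):
    print(p, errors[p])                    # max error collapses with order
```

    For smooth solutions the error decays exponentially in the degree, so far fewer degrees of freedom are needed than with a fixed low-order mesh refinement.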

  9. Special Population Planner 4: an open source release.

    SciTech Connect

    Kuiper, J.; Metz, W.; Tanzman, E.

    2008-01-01

    Emergencies like Hurricane Katrina and the recent California wildfires underscore the critical need to meet the complex challenge of planning for individuals with special needs and for institutionalized special populations. People with special needs and special populations often have difficulty responding to emergencies or taking protective actions, and emergency responders may be unaware of their existence and situations during a crisis. Special Population Planner (SPP) is an ArcGIS-based emergency planning system released as an open source product. SPP provides for easy production of maps, reports, and analyses to develop and revise emergency response plans. It includes tools to manage a voluntary registry of data for people with special needs, integrated links to plans and documents, tools for response planning and analysis, preformatted reports and maps, and data on locations of special populations, facility and resource characteristics, and contacts. The system can be readily adapted for new settings without programming and is broadly applicable. Full documentation and a demonstration database are included in the release.

  10. An Open Source Embedding Code for the Condensed Phase

    NASA Astrophysics Data System (ADS)

    Genova, Alessandro; Ceresoli, Davide; Krishtal, Alisa; Andreussi, Oliviero; Distasio, Robert; Pavanello, Michele

    Work from our group as well as others has shown that for many systems, such as molecular aggregates, liquids, and complex layered materials, subsystem Density-Functional Theory (DFT) is capable of immensely reducing the computational cost while providing a better and more intuitive insight into the underlying physics. We developed a massively parallel implementation of subsystem DFT for the condensed phase in the open-source Quantum ESPRESSO software package. In this talk, we will discuss how we: (1) implemented a flexible parallel framework aimed at optimal load balancing; (2) simplified the solution of the electronic structure problem by allowing a fragment-specific sampling of the first Brillouin Zone; (3) achieved enormous speedups by solving the electronic structure of each fragment in a unit cell smaller than the supersystem simulation cell, effectively introducing a fragment-specific basis set with no deterioration of the fully periodic simulation. As of March 14, 2016, the code has been released and is available to the public.

  11. MetaTrans: an open-source pipeline for metatranscriptomics

    PubMed Central

    Martinez, Xavier; Pozuelo, Marta; Pascal, Victoria; Campos, David; Gut, Ivo; Gut, Marta; Azpiroz, Fernando; Guarner, Francisco; Manichanh, Chaysavanh

    2016-01-01

    To date, meta-omic approaches use high-throughput sequencing technologies, which produce a huge amount of data, thus challenging modern computers. Here we present MetaTrans, an efficient open-source pipeline to analyze the structure and functions of active microbial communities using the power of multi-threading computers. The pipeline is designed to perform two types of RNA-Seq analyses: taxonomic and gene expression. It performs quality-control assessment and rRNA removal, maps reads against functional databases, and also handles differential gene expression analysis. Its efficacy was validated by analyzing data from synthetic mock communities, data from a previous study, and data generated from twelve human fecal samples. Compared to an existing web application server, MetaTrans is more efficient in terms of runtime (around 2 hours per million transcripts) and provides adapted tools to compare gene expression levels. It has been tested with a human gut microbiome database but also offers the option to use a general database in order to analyze other ecosystems. For the installation and use of the pipeline, we provide a detailed guide at the following website (www.metatrans.org). PMID:27211518

  12. Nucleophosmin integrates within the nucleolus via multi-modal interactions with proteins displaying R-rich linear motifs and rRNA

    PubMed Central

    Mitrea, Diana M; Cika, Jaclyn A; Guy, Clifford S; Ban, David; Banerjee, Priya R; Stanley, Christopher B; Nourse, Amanda; Deniz, Ashok A; Kriwacki, Richard W

    2016-01-01

    The nucleolus is a membrane-less organelle formed through liquid-liquid phase separation of its components from the surrounding nucleoplasm. Here, we show that nucleophosmin (NPM1) integrates within the nucleolus via a multi-modal mechanism involving multivalent interactions with proteins containing arginine-rich linear motifs (R-motifs) and ribosomal RNA (rRNA). Importantly, these R-motifs are found in canonical nucleolar localization signals. Based on a novel combination of biophysical approaches, we propose a model for the molecular organization within liquid-like droplets formed by the N-terminal domain of NPM1 and R-motif peptides, thus providing insights into the structural organization of the nucleolus. We identify multivalency of acidic tracts and folded nucleic acid binding domains, mediated by N-terminal domain oligomerization, as structural features required for phase separation of NPM1 with other nucleolar components in vitro and for localization within mammalian nucleoli. We propose that one mechanism of nucleolar localization involves phase separation of proteins within the nucleolus. DOI: http://dx.doi.org/10.7554/eLife.13571.001 PMID:26836305

  13. Propagation in waveguides with varying cross section and curvature: a new light on the role of supplementary modes in multi-modal methods.

    PubMed

    Maurel, Agnès; Mercier, Jean-François; Félix, Simon

    2014-06-01

    We present an efficient multi-modal method to describe the acoustic propagation in waveguides with varying curvature and cross section. A key feature is the use of a flexible geometrical transformation to a virtual space in which the waveguide is straight and has unitary cross section. In this new space, the pressure field has to satisfy a modified wave equation and associated modified boundary conditions. These boundary conditions are in general not satisfied by the Neumann modes used for the series representation of the field. Following previous work, an improved modal method (MM) is presented that makes use of two supplementary modes. The resulting improved convergence is demonstrated by comparison with the classical MM. Next, the following question is addressed: when the boundary conditions are satisfied by the Neumann modes, does the use of supplementary modes improve or degrade the convergence of the computed solution? Surprisingly, although the supplementary modes degrade the behaviour of the solution at the walls, they improve the convergence of the wavefield and of the scattering coefficients. This sheds new light on the role of the supplementary modes and opens the way for their use in a wide range of scattering problems. PMID:24910524
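
    The modal-series machinery described above can be illustrated with a toy one-dimensional projection onto Neumann modes cos(n*pi*y) on [0, 1]. The target profile f(y) = y is an invented example chosen because f' is nonzero at the walls, i.e. the boundary condition is not met by the modes and convergence is slow, which is exactly the regime where the paper's supplementary modes help.

```python
import numpy as np

y = np.linspace(0.0, 1.0, 4001)
f = y.copy()                       # f'(0) = f'(1) = 1: violates the Neumann BC

def integ(g):
    """Trapezoid rule on the fixed grid y."""
    return float(np.sum((g[:-1] + g[1:]) * 0.5) * (y[1] - y[0]))

def series_error(n_modes):
    """L2 error of the truncated Neumann-mode (cosine) series of f."""
    approx = np.full_like(y, integ(f))             # n = 0 (mean) term
    for n in range(1, n_modes + 1):
        phi = np.cos(n * np.pi * y)
        approx += 2.0 * integ(f * phi) * phi       # L2 projection coefficient
    return np.sqrt(integ((approx - f) ** 2))

errs = [series_error(n) for n in (4, 16, 64)]
print(errs)                        # error falls as modes are added, but slowly
```

    The slow (algebraic) decay seen here is what motivates augmenting the series with supplementary modes that carry the wall behaviour.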

  14. Applications of the Petri net to simulate, test, and validate the performance and safety of complex, heterogeneous, multi-modality patient monitoring alarm systems.

    PubMed

    Sloane, E B; Gelhot, V

    2004-01-01

    This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected. PMID:17271039
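
    The exhaustive state-space check that Petri nets enable can be sketched with a toy net and a breadth-first reachability search. The net below (an alarm must claim a display before it is shown) is invented for illustration and is far simpler than the 44-monitor models in the paper.

```python
from collections import deque

# places: (alarm_pending, display_free, alarm_shown) -- token counts
PLACES = ("alarm_pending", "display_free", "alarm_shown")
TRANSITIONS = {
    "show_alarm": ({"alarm_pending": 1, "display_free": 1}, {"alarm_shown": 1}),
    "clear":      ({"alarm_shown": 1}, {"display_free": 1}),
}

def enabled(marking, pre):
    return all(marking[PLACES.index(p)] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = list(marking)
    for p, n in pre.items():
        m[PLACES.index(p)] -= n
    for p, n in post.items():
        m[PLACES.index(p)] += n
    return tuple(m)

def reachable(m0):
    """Breadth-first enumeration of every reachable marking."""
    seen, queue = {m0}, deque([m0])
    while queue:
        m = queue.popleft()
        for pre, post in TRANSITIONS.values():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

# Two pending alarms, one free display: every reachable state can then be
# inspected for stuck or unnoticed-alarm conditions.
states = reachable((2, 1, 0))
print(sorted(states))
```

    Because every reachable marking is enumerated, a dangerous state (for example, an alarm pending while the display is stuck) is guaranteed to be found if it exists, which manual testing cannot promise.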

  15. Age-related changes in the structure and function of prefrontal cortex-amygdala circuitry in children and adolescents: a multi-modal imaging approach.

    PubMed

    Swartz, Johnna R; Carrasco, Melisa; Wiggins, Jillian Lee; Thomason, Moriah E; Monk, Christopher S

    2014-02-01

    The uncinate fasciculus is a major white matter tract that provides a crucial link between areas of the human brain that underlie emotion processing and regulation. Specifically, the uncinate fasciculus is the major direct fiber tract that connects the prefrontal cortex and the amygdala. The aim of the present study was to use a multi-modal imaging approach in order to simultaneously examine the relation between structural connectivity of the uncinate fasciculus and functional activation of the amygdala in a youth sample (children and adolescents). Participants were 9 to 19 years old and underwent diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI). Results indicate that greater structural connectivity of the uncinate fasciculus predicts reduced amygdala activation to sad and happy faces. This effect is moderated by age, with younger participants exhibiting a stronger relation. Further, decreased amygdala activation to sad faces predicts lower internalizing symptoms. These results provide important insights into brain structure-function relationships during adolescence, and suggest that greater structural connectivity of the uncinate fasciculus may facilitate regulation of the amygdala, particularly during early adolescence. These findings also have implications for understanding the relation between brain structure, function, and the development of emotion regulation difficulties, such as internalizing symptoms. PMID:23959199

  16. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain a robust rigid transformation and label configurations. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then introduced to resolve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieved an average registration error of less than 0.2 mm with no failure cases, which is superior to the state-of-the-art feature-based methods. PMID:26881433
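
    For readers unfamiliar with the ICP family, a minimal rigid ICP sketch in 2D is shown below. This is the classical algorithm that TACICP builds on, not the paper's conditional/auto-labeling variant; the point sets and transform are synthetic.

```python
import numpy as np

def icp_rigid(src, dst, iters=30):
    """Iteratively match nearest neighbours and solve rigid alignment (Kabsch)."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # 1. nearest-neighbour correspondences
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. closed-form rotation/translation aligning src to matched
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, size=(60, 2))
theta = 0.1                                   # small synthetic misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
source = target @ R_true.T + np.array([0.05, -0.04])
R, t, aligned = icp_rigid(source, target)
print(np.abs(aligned - target).max())         # residual after alignment
```

    Variants such as CICP add constraints (here, the paper's label configurations) to the correspondence step, which is what makes the match robust across modalities.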

  17. Propagation in waveguides with varying cross section and curvature: a new light on the role of supplementary modes in multi-modal methods

    PubMed Central

    Maurel, Agnès; Mercier, Jean-François; Félix, Simon

    2014-01-01

    We present an efficient multi-modal method to describe the acoustic propagation in waveguides with varying curvature and cross section. A key feature is the use of a flexible geometrical transformation to a virtual space in which the waveguide is straight and has unitary cross section. In this new space, the pressure field has to satisfy a modified wave equation and associated modified boundary conditions. These boundary conditions are in general not satisfied by the Neumann modes used for the series representation of the field. Following previous work, an improved modal method (MM) is presented that makes use of two supplementary modes. The resulting improved convergence is demonstrated by comparison with the classical MM. Next, the following question is addressed: when the boundary conditions are satisfied by the Neumann modes, does the use of supplementary modes improve or degrade the convergence of the computed solution? Surprisingly, although the supplementary modes degrade the behaviour of the solution at the walls, they improve the convergence of the wavefield and of the scattering coefficients. This sheds new light on the role of the supplementary modes and opens the way for their use in a wide range of scattering problems. PMID:24910524

  18. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    PubMed Central

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  19. Multi-modality PET-CT imaging of breast cancer in an animal model using nanoparticle x-ray contrast agent and 18F-FDG

    NASA Astrophysics Data System (ADS)

    Badea, C. T.; Ghaghada, K.; Espinosa, G.; Strong, L.; Annapragada, A.

    2011-03-01

    Multi-modality PET-CT imaging is playing an important role in the field of oncology. While PET imaging facilitates functional interrogation of tumor status, the use of CT imaging is primarily limited to anatomical reference. In an attempt to extract comprehensive information about tumor cells and their microenvironment, we used a nanoparticle x-ray contrast agent to image tumor vasculature and vessel 'leakiness' and 18F-FDG to investigate the metabolic status of tumor cells. In vivo PET/CT studies were performed in mice implanted with 4T1 mammary breast cancer cells. Early-phase micro-CT imaging enabled visualization of the 3D vascular architecture of the tumors, whereas delayed-phase micro-CT demonstrated highly permeable vessels, as evidenced by nanoparticle accumulation within the tumor. Both imaging modalities demonstrated the presence of a necrotic core, indicated by a hypo-enhanced region in the center of the tumor. At early time-points, the CT-derived fractional blood volume did not correlate with 18F-FDG uptake. At delayed time-points, the tumor enhancement in 18F-FDG micro-PET images correlated with the delayed signal enhancement due to nanoparticle extravasation seen in CT images. The proposed hybrid imaging approach could be used to better understand tumor angiogenesis and to serve as the basis for monitoring and evaluating anti-angiogenic and nano-chemotherapies.

  20. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  1. Nucleophosmin integrates within the nucleolus via multi-modal interactions with proteins displaying R-rich linear motifs and rRNA.

    PubMed

    Mitrea, Diana M; Cika, Jaclyn A; Guy, Clifford S; Ban, David; Banerjee, Priya R; Stanley, Christopher B; Nourse, Amanda; Deniz, Ashok A; Kriwacki, Richard W

    2016-01-01

    The nucleolus is a membrane-less organelle formed through liquid-liquid phase separation of its components from the surrounding nucleoplasm. Here, we show that nucleophosmin (NPM1) integrates within the nucleolus via a multi-modal mechanism involving multivalent interactions with proteins containing arginine-rich linear motifs (R-motifs) and ribosomal RNA (rRNA). Importantly, these R-motifs are found in canonical nucleolar localization signals. Based on a novel combination of biophysical approaches, we propose a model for the molecular organization within liquid-like droplets formed by the N-terminal domain of NPM1 and R-motif peptides, thus providing insights into the structural organization of the nucleolus. We identify multivalency of acidic tracts and folded nucleic acid binding domains, mediated by N-terminal domain oligomerization, as structural features required for phase separation of NPM1 with other nucleolar components in vitro and for localization within mammalian nucleoli. We propose that one mechanism of nucleolar localization involves phase separation of proteins within the nucleolus. PMID:26836305

  2. Simulation of the expected performance of INSERT: A new multi-modality SPECT/MRI system for preclinical and clinical imaging

    NASA Astrophysics Data System (ADS)

    Busca, P.; Fiorini, C.; Butt, A. D.; Occhipinti, M.; Peloso, R.; Quaglia, R.; Schembari, F.; Trigilio, P.; Nemeth, G.; Major, P.; Erlandsson, K.; Hutton, B. F.

    2014-01-01

    A new multi-modality imaging tool is under development in the framework of the INSERT (INtegrated SPECT/MRI for Enhanced Stratification in Radio-chemo Therapy) project, supported by the European Community. The final goal is to develop a custom SPECT apparatus that can be used as an insert for commercially available MRI systems, such as a 3 T MRI with a 59 cm bore diameter. INSERT is expected to offer more effective and earlier diagnosis, with a potentially better outcome in survival, for the treatment of brain tumors, primarily glioma. Two SPECT prototypes will be developed, one dedicated to preclinical imaging and the second to clinical imaging. The basic building block of the SPECT detector ring is a small 5 cm×5 cm gamma camera, based on the well-established Anger architecture, with a continuous scintillator read out by an array of silicon photodetectors. Silicon Drift Detectors (SDDs) and Silicon Photomultipliers (SiPMs) are being considered for the scintillator readout, since the detector choice plays a predominant role in the final performance of the system, such as the energy and spatial resolution and the useful field of view of the camera. Both solutions are therefore under study to evaluate their performance in terms of field of view (FOV) and spatial and energy resolution. Preliminary simulations for both the preclinical and clinical systems have been carried out to evaluate resolution and sensitivity.
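
    The Anger architecture mentioned above estimates the interaction position as the signal-weighted centroid of the photodetector outputs. A minimal sketch, with an invented 4x4 detector layout and a synthetic Gaussian light spot (not INSERT's actual geometry):

```python
import numpy as np

# 4x4 photodetector array on a 5 cm x 5 cm camera: pixel centres in cm
coords = (np.arange(4) + 0.5) * (5.0 / 4)
xs, ys = np.meshgrid(coords, coords)

# fake scintillation light distribution: a Gaussian spot at (2.0, 3.1) cm
true_x, true_y = 2.0, 3.1
signals = np.exp(-((xs - true_x) ** 2 + (ys - true_y) ** 2) / (2 * 0.8 ** 2))

# Anger logic: signal-weighted mean of the detector positions
x_est = (signals * xs).sum() / signals.sum()
y_est = (signals * ys).sum() / signals.sum()
print(round(x_est, 2), round(y_est, 2))
```

    Real cameras correct this estimate for edge truncation and non-uniform light collection, which is one reason the readout choice (SDD vs. SiPM) affects the useful field of view.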

  3. Evidence-based development and first usability testing of a social serious game based multi-modal system for early screening for atypical socio-cognitive development.

    PubMed

    Gyori, Miklos; Borsos, Zsófia; Stefanik, Krisztina

    2015-01-01

    At present, screening for, and diagnosis of, autism spectrum disorders (ASD) are based on purely behavioral data; established screening tools rely on human observation and ratings of relevant behaviors. The research and development project in the focus of this paper is aimed at designing, creating and evaluating a social-serious-game-based multi-modal, interactive software system for screening for high-functioning cases of ASD at kindergarten age. The aims of this paper are (1) to summarize the evidence-based design process and (2) to present results from the first usability test of the system. The game topic, candidate responses, and candidate game contents were identified via an iterative literature review. On this basis, the first partial prototype of the fully playable game has been created, with complete data recording functionality but without the decision-making component. A first usability test was carried out on this prototype (n=13). Overall, the results were unambiguously promising. Although sporadic difficulties in, and slightly negative attitudes towards, using the game occasionally arose, these were confined to non-target-group children only. The next steps of development include (1) completing the game design; (2) carrying out the first large-n field test; and (3) creating the first prototype of the decision-making component. PMID:26294452

  4. A sub-10 nA DC-balanced adaptive stimulator IC with multi-modal sensor for compact electro-acupuncture stimulation.

    PubMed

    Song, Kiseok; Lee, Hyungwoo; Hong, Sunjoo; Cho, Hyunwoo; Ha, Unsoo; Yoo, Hoi-Jun

    2012-12-01

    A compact electro-acupuncture (EA) system is proposed for multi-modal feedback EA treatment. It is composed of a needle, a compact EA patch, and an interconnecting conductive thread. The 3 cm diameter compact EA patch is implemented with an adaptive stimulator IC and a small coin battery using the planar-fashionable circuit board (P-FCB) technology. The adaptive stimulator IC can form a closed current loop for even a single needle, and it measures electromyography (EMG) and skin temperature to analyze the stimulation status as well as supplying a programmable stimulation current (40 μA-1 mA) with 5 different modes. The large time constant (LTC) sample-and-hold (S/H) current matching technique achieves the high-precision charge balancing (<10 nA) required for patient safety. The measured data can be wirelessly transmitted to the external EA analyzer through the body channel communication (BCC) transceiver for low power consumption. The external EA analyzer can show the patient's status, such as muscle fatigue and changes in skin temperature. Based on these analyses, the practitioner can adaptively change the stimulation parameters for optimal treatment. A 12.5 mm² 0.13 μm RF CMOS stimulator chip consumes 6.8 mW at 1.2 V while supporting 32 different current levels. The proposed compact EA system is fully implemented and tested on the human body. PMID:23853254
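
    The charge-balancing requirement can be illustrated with simple arithmetic: a biphasic stimulation pulse must leave near-zero net charge, and the residual averaged over the stimulation period is the DC current that must stay below the quoted 10 nA budget. Amplitudes, durations, and the repetition rate below are illustrative, not the paper's.

```python
def net_dc_current(phases, period_s):
    """phases: list of (current_A, duration_s); returns mean residual current."""
    charge = sum(i * t for i, t in phases)   # net coulombs per period
    return charge / period_s                 # average DC current in amperes

# 1 mA cathodic for 200 us, then 0.2 mA anodic for 1 ms (nominally balanced)
pulse = [(-1e-3, 200e-6), (+0.2e-3, 1e-3)]
residual = net_dc_current(pulse, period_s=20e-3)   # 50 Hz stimulation
print(residual)   # 0.0 A for a perfectly balanced pulse
```

    In hardware the two phases never cancel perfectly, which is why the chip needs an active matching technique (the paper's LTC S/H scheme) to keep the residual under 10 nA.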

  5. Nucleophosmin integrates within the nucleolus via multi-modal interactions with proteins displaying R-rich linear motifs and rRNA

    DOE PAGES Beta

    Mitrea, Diana M.; Cika, Jaclyn A.; Guy, Clifford S.; Ban, David; Banerjee, Priya R.; Stanley, Christopher B.; Nourse, Amanda; Deniz, Ashok A.; Kriwacki, Richard W.

    2016-02-02

    The nucleolus is a membrane-less organelle formed through liquid-liquid phase separation of its components from the surrounding nucleoplasm. Here, we show that nucleophosmin (NPM1) integrates within the nucleolus via a multi-modal mechanism involving multivalent interactions with proteins containing arginine-rich linear motifs (R-motifs) and ribosomal RNA (rRNA). Importantly, these R-motifs are found in canonical nucleolar localization signals. Based on a novel combination of biophysical approaches, we propose a model for the molecular organization within liquid-like droplets formed by the N-terminal domain of NPM1 and R-motif peptides, thus providing insights into the structural organization of the nucleolus. We identify multivalency of acidic tracts and folded nucleic acid binding domains, mediated by N-terminal domain oligomerization, as structural features required for phase separation of NPM1 with other nucleolar components in vitro and for localization within mammalian nucleoli. We propose that one mechanism of nucleolar localization involves phase separation of proteins within the nucleolus.

  6. Evaluation of Open-Source Hard Real Time Software Packages

    NASA Technical Reports Server (NTRS)

    Mattei, Nicholas S.

    2004-01-01

    replacing this somewhat costly implementation is the focus of one of the SA group's current research projects. The explosion of open source software in the last ten years has led to the development of a multitude of software solutions that were once only produced by major corporations. The benefits of these open projects include faster release and bug-patching cycles as well as inexpensive, if not free, software solutions. The main packages for hard real time solutions under Linux are the Real Time Application Interface (RTAI) and two varieties of Real Time Linux (RTL), RTLFree and RTLPro. During my time here at NASA I have been testing various hard real time solutions operating as layers on the Linux operating system. All testing is being run on an Intel SBC 2590, which is a common embedded hardware platform. The test plan was provided to me by the Software Assurance group at the start of my internship, and my job has been to test the systems by developing and executing the test cases on the hardware. These tests are constructed so that the Software Assurance group can get hard test data for a comparison between the open source and proprietary implementations of hard real time solutions.

  7. Free and Open Source Software for land degradation vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Imbrenda, Vito; Calamita, Giuseppe; Coluzzi, Rosa; D'Emilio, Mariagrazia; Lanfredi, Maria Teresa; Perrone, Angela; Ragosta, Maria; Simoniello, Tiziana

    2013-04-01

    the vulnerability to anthropic factors mainly connected with agricultural and grazing management. To achieve the final ESAs Index, depicting the overall vulnerability to degradation of the investigated area, we applied the geometric mean to cross-normalized indices related to each examined component. In this context QGIS was used to display data and to perform basic GIS calculations, whereas GRASS was used for map-algebra operations and image processing. Finally, R was used for statistical analysis (Principal Component Analysis) aimed at determining the relative importance of each adopted indicator. Our results show that the GRASS, QGIS and R software packages are suitable for mapping land degradation vulnerability and identifying highly vulnerable areas in which rehabilitation/recovery interventions are urgent. In addition, they allow us to highlight the most important drivers of degradation, thus supplying basic information for setting up intervention strategies. Ultimately, Free and Open Source Software offers a fair opportunity for geoscientific investigations thanks to its high interoperability and flexibility, enabling users to preserve the accuracy of the data and to reduce processing time. Moreover, the presence of several communities that steadily support users makes it possible to achieve high-quality results, making free and open source software a valuable and easy alternative to conventional commercial software.
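The ESAs Index computation described above (geometric mean of cross-normalized component indices) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the 1-2 normalization range follows the common ESA/MEDALUS convention and the function names are assumptions:

```python
import math

def normalize(values, vmin, vmax):
    """Cross-normalize raw indicator values onto a common 1-2 scale
    (1 = least vulnerable, 2 = most vulnerable)."""
    return [1.0 + (v - vmin) / (vmax - vmin) for v in values]

def esa_index(quality_indices):
    """Geometric mean of the component quality indices
    (e.g. soil, climate, vegetation, management)."""
    product = math.prod(quality_indices)
    return product ** (1.0 / len(quality_indices))
```

The geometric mean is preferred over the arithmetic mean here because a single low-vulnerability component cannot linearly offset a high-vulnerability one, which matches how degradation drivers interact.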

  8. Automatic Image Registration Using Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Image registration is the most critical operation in remote sensing applications, enabling location-based referencing and analysis of earth features. It is the first step for any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time-consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to a reference. Moreover, because the process involves human interaction, results do not converge across repeated operations at different times. Automated procedures rely on accurately determining the matching locations or points from both images under comparison, and such procedures are robust and consistent over time. Different algorithms are available to achieve this, based on pattern recognition, feature-based detection, similarity techniques etc. In the present study and implementation, correlation-based methods have been used with an improvement: a newly developed technique for identifying and pruning false points of match. Free and Open Source Software (FOSS) has been used to develop the methodology to reach a wider audience, without any dependency on COTS (commercial off-the-shelf) software. Standard deviation from the foci of the ellipse of correlated points is a statistical means of ensuring the best match of the points of interest based on both intensity values and location correspondence. The methodology was developed and standardised through enhancements to meet the registration requirements of remote sensing imagery. Results have shown a performance improvement, nearly matching visual techniques, and have been implemented in operational remote sensing projects. The main advantage of the proposed methodology is its viability in a production-mode environment.
This paper also shows that the visualization capabilities of MapWinGIS, GDAL's image handling abilities and OSSIM's correlation facility can be efficiently
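The two building blocks of the approach above — correlation-based matching and statistical pruning of false tie points — can be sketched as follows. This is a simplified illustration: the paper's actual pruning criterion uses the foci of the ellipse of correlated points, whereas the sketch below substitutes a plainer mean-and-standard-deviation outlier test, and all names are assumptions:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches
    (1.0 = perfect linear match, 0.0 = no correlation)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def prune_matches(shifts, k=1.0):
    """Discard tie points whose estimated shift deviates from the mean
    by more than k standard deviations (likely false matches)."""
    n = len(shifts)
    mean = sum(shifts) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in shifts) / n)
    return [s for s in shifts if abs(s - mean) <= k * sd]
```

A genuine set of tie points between two images of the same scene should agree on a common shift; points that disagree strongly are the "false points of match" the methodology removes before computing the registration transform.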

  9. Learning from open source software projects to improve scientific review

    PubMed Central

    Ghosh, Satrajit S.; Klein, Arno; Avants, Brian; Millman, K. Jarrod

    2012-01-01

    Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers materials and methods to perform comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing for the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers. PMID:22529798

  10. Learning from open source software projects to improve scientific review.

    PubMed

    Ghosh, Satrajit S; Klein, Arno; Avants, Brian; Millman, K Jarrod

    2012-01-01

    Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers materials and methods to perform comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing for the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers. PMID:22529798

  11. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special-purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near-coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  12. An open-source chemical kinetics network: VULCAN

    NASA Astrophysics Data System (ADS)

    Tsai, Shang-Min; Lyons, James; Heng, Kevin

    2015-12-01

    I will present VULCAN, an open-source 1D chemical kinetics code suited to the temperature and pressure range relevant to observable exoplanet atmospheres. The chemical network is based on a set of reduced rate coefficients for C-H-O systems. Most of the rate coefficients are based on the NIST online database and validated by comparison with thermodynamic equilibrium codes (TEA, STANJAN). The difference between the experimental rates and those from the thermodynamic data is carefully examined and discussed. For the numerical method, a simple, quick, semi-implicit Euler integrator is adopted to solve the stiff chemical reactions, within an operator-splitting scheme for computational efficiency. Several test runs of VULCAN are shown in a hierarchical way: pure H, H+O, H+O+C, including controlled experiments performed with simple analytical temperature-pressure profiles, so that different parameters, such as the stellar irradiation, atmospheric opacities and albedo, can be individually explored to understand how these properties affect the temperature structure and hence the chemical abundances. I will also revisit the "transport-induced quenching" effects, and discuss the limitations of this approximation and its impact on observations. Finally, I will discuss the effects of the C/O ratio and compare with published work in the literature. VULCAN is written in Python and is part of the publicly available set of community tools we call the Exoclimes Simulation Platform (ESP; www.exoclime.org). I am a Ph.D. student of Kevin Heng at the University of Bern, Switzerland.
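The reason an implicit (or semi-implicit) Euler step is chosen for stiff chemical networks can be shown on the simplest stiff problem, linear decay dy/dt = -k·y. The sketch below is illustrative, not VULCAN's actual integrator: for this linear case the semi-implicit update reduces to an exact closed form, whereas explicit Euler diverges whenever k·dt > 2:

```python
def semi_implicit_euler(y0, k, dt, steps):
    """Integrate dy/dt = -k*y with a (linearized) backward Euler step.

    Solving y_next = y - k*dt*y_next for y_next gives
    y_next = y / (1 + k*dt), which is unconditionally stable:
    the solution decays toward zero for any positive k*dt,
    no matter how stiff the rate constant k is.
    """
    y = y0
    for _ in range(steps):
        y = y / (1.0 + k * dt)
    return y
```

With k·dt = 10⁶ an explicit Euler step would overshoot catastrophically; the implicit update simply damps the species abundance toward equilibrium, which is why stiff kinetics solvers accept the extra cost of the implicit solve.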

  13. wradlib - An Open Source Library for Weather Radar Data Processing

    NASA Astrophysics Data System (ADS)

    Heistermann, M.; Pfaff, Th.; Jacobi, S.

    2012-04-01

    Weather radar data is potentially useful in meteorology, hydrology, and disaster prevention and mitigation. Its ability to provide information on precipitation with high spatial and temporal resolution over large areas makes it an invaluable tool for short-term weather forecasting or flash flood forecasting. The indirect method of measuring the precipitation field, however, leads to a significant number of data artifacts, which usually must be removed or dealt with before the data can be used with acceptable quality. Data processing requires, e.g., the transformation of measurements from polar to Cartesian coordinates and from reflectivity to rainfall intensity, the composition of data from several radar sites in a common grid, clutter identification and removal, attenuation and VPR corrections, gauge adjustment and visualization. The complexity of these processing steps is a major obstacle for many potential users in science and practice. Adequate tools are available either only at significant cost with no access to the underlying source code, or they are incomplete, insufficiently documented and opaque. The wradlib project has been initiated in order to lower the barrier for potential users of weather radar data in the geosciences and to provide a common platform for research on new algorithms. wradlib is an open source library for the full range of weather radar related processing algorithms, which is well documented and easy to use. The main parts of the library are currently implemented in the Python programming language. Python is well known both for its ease of use and for its ability to integrate code written in other programming languages such as Fortran or C/C++. The well-established NumPy and SciPy packages are used to provide decent performance for pure Python implementations of algorithms. We welcome contributions written in any computer language and will try to make them accessible from Python. We would like to present the current state of this
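Two of the processing steps named above have compact, well-known formulations: the reflectivity-to-rainfall conversion via a Z-R power law (Marshall-Palmer: Z = a·R^b with a = 200, b = 1.6), and the polar-to-Cartesian projection of radar bins. The sketch below illustrates both; it is not wradlib's code, and the function names are assumptions:

```python
import math

def z_to_rainrate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via the
    Marshall-Palmer Z-R relation Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity factor Z
    return (z / a) ** (1.0 / b)   # invert the power law for R

def polar_to_cartesian(rng, azimuth_deg):
    """Project a (range, azimuth) radar bin to local x/y coordinates,
    with azimuth measured clockwise from north (meteorological convention)."""
    az = math.radians(azimuth_deg)
    return rng * math.sin(az), rng * math.cos(az)
```

A full processing chain would apply clutter removal and attenuation correction to Z before the Z-R step, then resample the Cartesian points onto a common grid for compositing; wradlib provides tested implementations of each stage.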

  14. Acquire: an open-source comprehensive cancer biobanking system

    PubMed Central

    Dowst, Heidi; Pew, Benjamin; Watkins, Chris; McOwiti, Apollo; Barney, Jonathan; Qu, Shijing; Becnel, Lauren B.

    2015-01-01

    Motivation: The probability of effective treatment of cancer with a targeted therapeutic can be improved for patients with defined genotypes containing actionable mutations. To this end, many human cancer biobanks are integrating more tightly with genomic sequencing facilities and with those creating and maintaining patient-derived xenografts (PDX) and cell lines to provide renewable resources for translational research. Results: To support the complex data management needs and workflows of several such biobanks, we developed Acquire. It is a robust, secure, web-based, database-backed open-source system that supports all major needs of a modern cancer biobank. Its modules allow for i) up-to-the-minute ‘scoreboard’ and graphical reporting of collections; ii) end user roles and permissions; iii) specimen inventory through caTissue Suite; iv) shipping forms for distribution of specimens to pathology, genomic analysis and PDX/cell line creation facilities; v) robust ad hoc querying; vi) molecular and cellular quality control metrics to track specimens’ progress and quality; vii) public researcher requests; viii) resource allocation committee distribution request review and oversight and ix) linkage to available derivatives of specimens. Availability and Implementation: Acquire implements standard controlled vocabularies, ontologies and objects from the NCI, CDISC and others. Here we describe the functionality of the system, its technology stack and the processes it supports. A test version of Acquire is available at https://tcrbacquire-stg.research.bcm.edu; software is available at https://github.com/BCM-DLDCC/Acquire; and UML models, data and workflow diagrams, behavioral specifications and other documents are available at https://github.com/BCM-DLDCC/Acquire/tree/master/supplementaryMaterials. Contact: becnel@bcm.edu PMID:25573920

  15. An open source lower limb model: Hip joint validation.

    PubMed

    Modenese, L; Phillips, A T M; Bull, A M J

    2011-08-11

    Musculoskeletal lower limb models have been shown to be able to predict hip contact forces (HCFs) that are comparable to in vivo measurements obtained from instrumented prostheses. However, the muscle recruitment predicted by these models does not necessarily compare well to measured electromyographic (EMG) signals. In order to verify whether it is possible to accurately estimate HCFs from muscle force patterns consistent with EMG measurements, a lower limb model based on a published anatomical dataset (Klein Horsman et al., 2007. Clinical Biomechanics. 22, 239-247) has been implemented in the open source software OpenSim. A cycle-to-cycle hip joint validation was conducted against HCFs recorded during gait and stair climbing trials of four arthroplasty patients (Bergmann et al., 2001. Journal of Biomechanics. 34, 859-871). Hip joint muscle tensions were estimated by minimizing a polynomial function of the muscle forces. The resulting muscle activation patterns, obtained by assessing multiple powers of the objective function, were compared against EMG profiles from the literature. Calculated HCFs showed a tendency to increase monotonically in magnitude as the power of the objective function was raised; the best estimation obtained from muscle forces consistent with experimental EMG profiles was found when a quadratic objective function was minimized (average overestimation at experimental peak frame: 10.1% for walking, 7.8% for stair climbing). The lower limb model can produce appropriate balanced sets of muscle forces and joint contact forces that can be used in a range of applications requiring accurate quantification of both. The developed model is available at the website https://simtk.org/home/low_limb_london. PMID:21742331
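The quadratic objective that performed best above has a closed-form solution in the simplest unconstrained case, which makes the recruitment principle easy to see. The sketch below is a deliberately reduced illustration, not the OpenSim static-optimization solver: minimizing the sum of squared muscle forces subject to a single joint-moment constraint Σ rᵢ·Fᵢ = M yields, via Lagrange multipliers, Fᵢ = M·rᵢ / Σ rⱼ². Real solvers additionally enforce non-negative forces and physiological force limits:

```python
def quadratic_recruitment(moment_arms, joint_moment):
    """Distribute a joint moment across muscles by minimizing sum(F_i**2)
    subject to sum(r_i * F_i) = M.

    The Lagrange-multiplier solution loads each muscle in proportion
    to its moment arm: F_i = M * r_i / sum(r_j**2).
    """
    denom = sum(r * r for r in moment_arms)
    return [joint_moment * r / denom for r in moment_arms]
```

The quadratic power spreads load across synergists (larger moment arms carry proportionally more force); higher powers of the objective spread load even more evenly, which is why the paper reports HCFs growing monotonically with the exponent.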

  16. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating procedure (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners thus far. In addition, few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of not only operating systems (OS) but also manufacturers) analyses of the QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our devised software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our devised software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.
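The abstract does not detail the QC metrics computed, but a representative phantom-based measure is integral uniformity over the phantom's uniform region, defined in the NEMA convention as (max − min)/(max + min). The sketch below is an illustrative stand-in for one such metric, not the authors' IDL code:

```python
def integral_uniformity(pixels):
    """Integral uniformity of a uniform-region pixel sample:
    (max - min) / (max + min), expressed as a fraction.

    0.0 means a perfectly flat response; clinical tolerance limits
    are set by the site's QC protocol."""
    hi, lo = max(pixels), min(pixels)
    return (hi - lo) / (hi + lo)
```

Because this and similar metrics depend only on pixel values, computing them in a scanner-agnostic layer (after decoding each vendor's file format) is what makes a single cross-platform QC tool feasible.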

  17. Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2007-01-01

    This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…

  18. An Evaluation of Open Source Learning Management Systems According to Administration Tools and Curriculum Design

    ERIC Educational Resources Information Center

    Ozdamli, Fezile

    2007-01-01

    Distance education is becoming more important in the universities and schools. The aim of this research is to evaluate the current existing Open Source Learning Management Systems according to Administration tool and Curriculum Design. For this, seventy two Open Source Learning Management Systems have been subjected to a general evaluation. After…

  19. Evaluating Open Source Software for Use in Library Initiatives: A Case Study Involving Electronic Publishing

    ERIC Educational Resources Information Center

    Samuels, Ruth Gallegos; Griffy, Henry

    2012-01-01

    This article discusses best practices for evaluating open source software for use in library projects, based on the authors' experience evaluating electronic publishing solutions. First, it presents a brief review of the literature, emphasizing the need to evaluate open source solutions carefully in order to minimize Total Cost of Ownership. Next,…

  20. Open Source Software Development and Lotka's Law: Bibliometric Patterns in Programming.

    ERIC Educational Resources Information Center

    Newby, Gregory B.; Greenberg, Jane; Jones, Paul

    2003-01-01

    Applies Lotka's Law to metadata on open source software development. Authoring patterns found in software development productivity are found to be comparable to prior studies of Lotka's Law for scientific and scholarly publishing, and offer promise in predicting aggregate behavior of open source developers. (Author/LRW)