Science.gov

Sample records for open-source cross-platform multi-modal

  1. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and the ease of use of Python allow rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
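
    The central idea above, overlaying modalities in one defined coordinate frame, can be illustrated with a short sketch that is not DV3D code: nibabel reads the NIfTI affines and SciPy resamples a statistical map into a structural scan's voxel grid. The file names and the use of nibabel/SciPy are assumptions for illustration only.

```python
# Sketch: resample a "moving" NIfTI volume into the voxel grid of a "fixed"
# volume so both can be overlaid in one coordinate frame. Not DV3D code;
# file names are placeholders.
import numpy as np
import nibabel as nib
from scipy.ndimage import affine_transform

fixed = nib.load("anatomy_T1.nii.gz")    # hypothetical structural scan
moving = nib.load("stat_map.nii.gz")     # hypothetical statistical overlay

# Voxel-to-voxel mapping: fixed voxel -> world (RAS) coordinates -> moving voxel.
vox2vox = np.linalg.inv(moving.affine) @ fixed.affine

resampled = affine_transform(
    moving.get_fdata(),
    matrix=vox2vox[:3, :3],
    offset=vox2vox[:3, 3],
    output_shape=fixed.shape,
    order=1,                             # trilinear interpolation
)

# 'resampled' now shares the fixed image's grid and can be alpha-blended or
# thresholded on top of it by any renderer (e.g. VTK or matplotlib).
print(fixed.shape, resampled.shape)
```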

  2. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool.

    PubMed

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2009-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and the ease of use of Python allow rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data.

  3. Graphical model based multivariate analysis (GAMMA): an open-source, cross-platform neuroimaging data analysis software package.

    PubMed

    Chen, Rong; Herskovits, Edward H

    2012-04-01

    The GAMMA suite is an open-source, cross-platform data-mining software package designed to analyze neuroimaging data. Analyzing brain image volumes is a very challenging problem, due to undersampling and the potential for multivariate nonlinear interactions among variables. The GAMMA suite provides a set of tools to facilitate the analysis of neuroimaging data.

  4. OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis

    NASA Astrophysics Data System (ADS)

    Grohmann, C. H.; Campanha, G. A.

    2010-12-01

    Free and open source software (FOSS) is increasingly seen as a synonym of innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, either under commercial licenses or as free downloads from the Internet. Some provide basic tools of stereographic projection such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc., while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of data formats for input, Graphical User Interface (GUI) designs and graphic export formats. The majority of packages are built for MS-Windows and, even though there are packages for the UNIX-based MacOS, there are no native packages for *nix (UNIX, Linux, BSD etc.) Operating Systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the Numpy module and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic exporting to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of
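
    As a rough illustration of the input format and pole plotting described above (not OpenStereo's own code; the file name and column order are assumptions), a few lines of NumPy/Matplotlib suffice to read dip-direction/dip pairs and project poles onto a lower-hemisphere equal-area net:

```python
# Sketch: plot poles to planes on an equal-area (Schmidt) net from an ASCII
# file of "dip_direction dip" pairs (whitespace-separated; pass delimiter=","
# for comma-separated files). Input file name is hypothetical.
import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt("bedding.txt")
dip_dir, dip = data[:, 0], data[:, 1]

# Pole to a plane: trend = dip direction + 180, plunge = 90 - dip.
trend = np.radians((dip_dir + 180.0) % 360.0)
plunge = np.radians(90.0 - dip)

# Lower-hemisphere equal-area projection onto a unit circle.
r = np.sqrt(2.0) * np.sin((np.pi / 2.0 - plunge) / 2.0)
x, y = r * np.sin(trend), r * np.cos(trend)          # azimuth measured from north

fig, ax = plt.subplots(figsize=(4, 4))
ax.add_artist(plt.Circle((0, 0), 1.0, fill=False))   # primitive circle
ax.plot(x, y, "k.", markersize=4)
ax.set_aspect("equal")
ax.set_axis_off()
plt.savefig("poles.png", dpi=150)
```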

  5. A new, open-source, multi-modality digital breast phantom

    NASA Astrophysics Data System (ADS)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
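
    The Voronoi step mentioned above can be sketched in a few lines as a simplified stand-in for the phantom's actual compartment-generation algorithm (grid size and seed count are arbitrary): each voxel is assigned to its nearest randomly placed seed, partitioning the volume into candidate glandular compartments.

```python
# Sketch: discrete Voronoi partition of a toy voxel grid, i.e. label every
# voxel by its nearest seed point. A simplified stand-in for the phantom's
# glandular-compartment step; sizes are arbitrary.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
shape = (64, 64, 64)                      # toy voxel grid
seeds = rng.uniform(0, 64, size=(12, 3))  # 12 compartment seed points

voxels = np.indices(shape).reshape(3, -1).T.astype(float)
_, label = cKDTree(seeds).query(voxels)   # index of nearest seed per voxel
compartments = label.reshape(shape)

print(np.bincount(compartments.ravel()))  # voxel count per compartment
```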

  6. An open-source and cross-platform framework for Brain Computer Interface-guided robotic arm control

    PubMed Central

    Kubben, Pieter L.; Pouratian, Nader

    2012-01-01

    Brain Computer Interfaces (BCIs) have focused on several areas, of which motor substitution has received particular interest. Whereas open-source BCI software is available to facilitate cost-effective collaboration between research groups, it mainly focuses on communication and computer control. We developed an open-source and cross-platform framework, which works with cost-effective equipment that allows researchers to enter the field of BCI-based motor substitution without major investments upfront. It is based on the C++ programming language and the Qt framework, and offers a separate class for custom MATLAB/Simulink scripts. It has been tested using a 14-channel wireless electroencephalography (EEG) device and a low-cost robotic arm that offers 5 degrees of freedom. The software contains four modules to control the robotic arm, one of which receives input from the EEG device. Strengths, current limitations, and future developments will be discussed. PMID:23372966

  7. An open-source and cross-platform framework for Brain Computer Interface-guided robotic arm control.

    PubMed

    Kubben, Pieter L; Pouratian, Nader

    2012-01-01

    Brain Computer Interfaces (BCIs) have focused on several areas, of which motor substitution has received particular interest. Whereas open-source BCI software is available to facilitate cost-effective collaboration between research groups, it mainly focuses on communication and computer control. We developed an open-source and cross-platform framework, which works with cost-effective equipment that allows researchers to enter the field of BCI-based motor substitution without major investments upfront. It is based on the C++ programming language and the Qt framework, and offers a separate class for custom MATLAB/Simulink scripts. It has been tested using a 14-channel wireless electroencephalography (EEG) device and a low-cost robotic arm that offers 5 degrees of freedom. The software contains four modules to control the robotic arm, one of which receives input from the EEG device. Strengths, current limitations, and future developments will be discussed.

  8. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    PubMed

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.

  9. OpenChrom: a cross-platform open source software for the mass spectrometric analysis of chromatographic data

    PubMed Central

    2010-01-01

    Background Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which needs to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between the analytical results. To address this situation, a number of either commercial or non-profit software applications have been developed. These applications provide functionalities to import and analyze several data formats but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. Results This work describes a native approach to handle chromatographic data files. The approach can be extended in its functionality, for example with facilities to detect baselines, to detect, integrate and identify peaks, and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied to the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations like do, undo and redo are supported. Conclusions OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms. It offers a customizable graphical user interface. The software is independent of the operating system, due to the fact that the Rich Client Platform is written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions. They can be published using open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net. PMID:20673335
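
    OpenChrom itself is built on Java and the Eclipse Rich Client Platform, but the kind of filter described above (background and noise removal) can be sketched in a few lines of Python for illustration; the chromatogram below is synthetic.

```python
# Sketch of the filtering idea described above: estimate a slowly varying
# baseline with a rolling minimum and suppress noise with Savitzky-Golay
# smoothing. Synthetic signal; not OpenChrom code.
import numpy as np
from scipy.ndimage import minimum_filter1d, uniform_filter1d
from scipy.signal import savgol_filter

t = np.linspace(0, 30, 3000)                     # retention time (min)
rng = np.random.default_rng(1)
tic = (np.exp(-((t - 8.0) ** 2) / 0.02)          # two peaks ...
       + 0.6 * np.exp(-((t - 17.0) ** 2) / 0.05)
       + 0.05 * t                                # ... a drifting background
       + 0.02 * rng.normal(size=t.size))         # ... and noise

baseline = uniform_filter1d(minimum_filter1d(tic, size=301), size=301)
cleaned = savgol_filter(tic - baseline, window_length=21, polyorder=3)

print(f"max raw = {tic.max():.2f}, max corrected = {cleaned.max():.2f}")
```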

  10. GeolOkit 1.0: a new Open Source, Cross-Platform software for geological data visualization in Google Earth environment

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Antoine; Bastin, Christophe; Watlet, Arnaud

    2016-04-01

    GIS software suites are today's essential tools to gather and visualise geological data, to apply spatial and temporal analysis and, in fine, to create and share interactive maps for further geosciences' investigations. For these purposes, we developed GeolOkit: an open-source, freeware and lightweight software, written in Python, a high-level, cross-platform programming language. GeolOkit software is accessible through a graphical user interface, designed to run in parallel with Google Earth. It is a very user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to plot them into the Google Earth environment using KML code. This workflow requires no third-party software, except Google Earth itself. GeolOkit comes with a large number of geosciences' labels, symbols, colours and placemarks and can process: (i) multi-point data, (ii) contours via several interpolation methods, (iii) discrete planar and linear structural data in 2D or 3D, supporting a wide range of structural input formats, (iv) clustered stereonets and rose diagrams, (v) drawn cross-sections as vertical sections, (vi) georeferenced maps and vectors, and (vii) field pictures, using either geo-tagging metadata from a camera's built-in GPS module or the same-day track of an external GPS. We invite you to discover all the functionalities of the GeolOkit software. As this project is under development, we welcome discussion of your needs, ideas and contributions to the GeolOkit project.
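
    The core workflow described above, turning field data into KML that Google Earth opens directly, comes down to emitting small XML documents. The sketch below writes one placemark from a hypothetical field reading using only the Python standard library; it is an illustration, not GeolOkit code.

```python
# Sketch: emit a minimal KML placemark for one field measurement so it can be
# opened in Google Earth. Coordinates and attitude values are hypothetical.
from xml.sax.saxutils import escape

def placemark_kml(name, lon, lat, description=""):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{escape(name)}</name>
      <description>{escape(description)}</description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

with open("site_S012.kml", "w", encoding="utf-8") as f:
    f.write(placemark_kml("S012 bedding 110/35", lon=4.35, lat=50.45,
                          description="dip direction 110, dip 35"))
```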

  11. Multi-Modality Phantom Development

    SciTech Connect

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  12. Multi-Modal Interaction for Robotic Mules

    DTIC Science & Technology

    2014-02-26

    gaze, hand and arm signals, and fine-grained finger movements. Gesture-based interaction with computers ... evaluation, and a prototype multi-modal interface that can be used to command a robotic platform. SUBJECT TERMS: multi-modal interaction, human-robot ... Multi-Modal Interaction for Robotic Mules. Glenn Taylor, Mike Quist, Matt Lanting, Cory Dunham, Patrick Theisen, Paul Muench

  13. UAS Cross Platform JTA

    DTIC Science & Technology

    2014-07-18

    Naval Medical Research Unit Dayton. UAS CROSS PLATFORM JTA, FINAL REPORT. Mangos, Vincenzi, Shrader, Williams, Arnold. NAMRU-D REPORT NUMBER... PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Naval Medical Research Unit Dayton, 2624 Q Street, Bldg... units for over-the-hill, real-time direct situational awareness for combat support and target information. The "B" variant began production in 2006

  14. Open Source Vision

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    Increasingly, colleges and universities are turning to open source as a way to meet their technology infrastructure and application needs. Open source has changed life for visionary CIOs and their campus communities nationwide. The author discusses what these technologists see as the benefits--and the considerations.

  15. Creating Open Source Conversation

    ERIC Educational Resources Information Center

    Sheehan, Kate

    2009-01-01

    Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…

  16. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  19. mDCC_tools: characterizing multi-modal atomic motions in molecular dynamics trajectories.

    PubMed

    Kasahara, Kota; Mohan, Neetha; Fukuda, Ikuo; Nakamura, Haruki

    2016-08-15

    We previously reported the multi-modal Dynamic Cross Correlation (mDCC) method for analyzing molecular dynamics trajectories. This method quantifies the correlation coefficients of atomic motions with complex multi-modal behaviors by using a Bayesian-based pattern recognition technique that can effectively capture transiently formed, unstable interactions. Here, we present an open source toolkit for performing the mDCC analysis, including pattern recognitions, complex network analyses and visualizations. We include a tutorial document that thoroughly explains how to apply this toolkit for an analysis, using the example trajectory of the 100 ns simulation of an engineered endothelin-1 peptide dimer. The source code is available for free at http://www.protein.osaka-u.ac.jp/rcsfp/pi/mdcctools/, implemented in C++ and Python, and supported on Linux. Contact: kota.kasahara@protein.osaka-u.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
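
    For orientation, the quantity that mDCC generalises, the conventional dynamic cross-correlation between atomic displacements, can be computed from a trajectory array in a few NumPy lines (synthetic coordinates here; the Bayesian multi-modal treatment in mDCC_tools goes well beyond this).

```python
# Sketch: conventional dynamic cross-correlation (DCC) matrix,
#   C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>),
# the single-mode quantity that mDCC generalises. Synthetic trajectory of
# shape (frames, atoms, 3).
import numpy as np

rng = np.random.default_rng(0)
traj = rng.normal(size=(500, 20, 3))     # hypothetical coordinates

disp = traj - traj.mean(axis=0)          # displacement from mean position
dots = np.einsum("fia,fja->ij", disp, disp) / traj.shape[0]
norm = np.sqrt(np.outer(np.diag(dots), np.diag(dots)))
dcc = dots / norm                        # values in [-1, 1]

print(dcc.shape, dcc[0, :5].round(2))
```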

  20. mDCC_tools: characterizing multi-modal atomic motions in molecular dynamics trajectories

    PubMed Central

    Kasahara, Kota; Mohan, Neetha; Fukuda, Ikuo; Nakamura, Haruki

    2016-01-01

    Summary: We previously reported the multi-modal Dynamic Cross Correlation (mDCC) method for analyzing molecular dynamics trajectories. This method quantifies the correlation coefficients of atomic motions with complex multi-modal behaviors by using a Bayesian-based pattern recognition technique that can effectively capture transiently formed, unstable interactions. Here, we present an open source toolkit for performing the mDCC analysis, including pattern recognitions, complex network analyses and visualizations. We include a tutorial document that thoroughly explains how to apply this toolkit for an analysis, using the example trajectory of the 100 ns simulation of an engineered endothelin-1 peptide dimer. Availability and implementation: The source code is available for free at http://www.protein.osaka-u.ac.jp/rcsfp/pi/mdcctools/, implemented in C++ and Python, and supported on Linux. Contact: kota.kasahara@protein.osaka-u.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153575

  1. Multi Modal Anticipation in Fuzzy Space

    NASA Astrophysics Data System (ADS)

    Asproth, Viveca; Holmberg, Stig C.; Håkansson, Anita

    2006-06-01

    We are all stakeholders in the geographical space, which makes up our common living and activity space. This means that a careful, creative, and anticipatory planning, design, and management of that space will be of paramount importance for our sustained life on earth. Here it is shown that the quality of such planning could be significantly increased with the help of a computer-based modelling and simulation tool. Further, the design and implementation of such a tool ought to be guided by the conceptual integration of some core concepts like anticipation and retardation, multi-modal system modelling, fuzzy space modelling, and multi-actor interaction.

  2. Quantitative multi-modal NDT data analysis

    SciTech Connect

    Heideklang, René; Shokouhi, Parisa

    2014-02-18

    A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
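
    A minimal picture of the 'high-level' fusion mentioned above: each modality is reduced to a normalised defect-likelihood map and the maps are combined, here simply by averaging, before thresholding. The arrays are synthetic and the combination rule is only one of the several approaches the contribution discusses.

```python
# Sketch: high-level (decision-level) fusion of per-sensor detection maps.
# Each map is min-max normalised to [0, 1] and then averaged; synthetic data.
import numpy as np

rng = np.random.default_rng(2)
eddy_current = rng.random((64, 64))
gmr          = rng.random((64, 64))
thermography = rng.random((64, 64))

def normalise(m):
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

fused = np.mean([normalise(eddy_current), normalise(gmr),
                 normalise(thermography)], axis=0)
detections = fused > 0.8                 # example decision threshold

print(int(detections.sum()), "pixels flagged")
```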

  3. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Evaluating Open Source Portals

    ERIC Educational Resources Information Center

    Goh, Dion; Luyt, Brendan; Chua, Alton; Yee, See-Yong; Poh, Kia-Ngoh; Ng, How-Yeu

    2008-01-01

    Portals have become indispensable for organizations of all types trying to establish themselves on the Web. Unfortunately, there have only been a few evaluative studies of portal software and even fewer of open source portal software. This study aims to add to the available literature in this important area by proposing and testing a checklist for…

  5. Open Source in Education

    ERIC Educational Resources Information Center

    Lakhan, Shaheen E.; Jhunjhunwala, Kavita

    2008-01-01

    Educational institutions have rushed to put their academic resources and services online, bringing the global community onto a common platform and awakening the interest of investors. Despite continuing technical challenges, online education shows great promise. Open source software offers one approach to addressing the technical problems in…

  6. Open-Source Colorimeter

    PubMed Central

    Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.

    2013-01-01

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032
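
    The measurement chain behind such a colorimeter can be stated as a short worked sketch: convert sensor intensity to absorbance via the Beer-Lambert relation, then map absorbance to COD through a linear calibration curve. All numbers below are hypothetical and for illustration only.

```python
# Sketch: absorbance from raw intensity readings, A = -log10(I / I0), and a
# linear calibration mapping absorbance to COD. All values are hypothetical.
import numpy as np

def absorbance(sample_counts, blank_counts):
    return -np.log10(sample_counts / blank_counts)

# Hypothetical calibration standards (mg/L COD) and measured absorbances.
cod_standards = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])
abs_standards = np.array([0.002, 0.036, 0.090, 0.181, 0.360])

slope, intercept = np.polyfit(abs_standards, cod_standards, 1)

A = absorbance(sample_counts=8150.0, blank_counts=9420.0)
print(f"A = {A:.3f}, estimated COD = {slope * A + intercept:.0f} mg/L")
```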

  9. Open Source Software Development

    DTIC Science & Technology

    2011-01-01

    Agency's XMM-Newton Observatory, the Sloan Digital Sky Survey, and others. These are three highly visible astrophysics research projects whose... In scientific fields like astrophysics that critically depend on software, open source is considered an essential precondition for research to... space are made, this in turn often leads to modification, extension, and new versions of the astronomical software in use that enable astrophysical

  10. Open-Source GIS

    SciTech Connect

    Vatsavai, Raju; Burk, Thomas E; Lime, Steve

    2012-01-01

    The components making up an Open Source GIS are explained in this chapter. A map server (Sect. 30.1) can broadly be defined as a software platform for dynamically generating spatially referenced digital map products. The University of Minnesota MapServer (UMN Map Server) is one such system. Its basic features are visualization, overlay, and query. Section 30.2 names and explains many of the geospatial open source libraries, such as GDAL and OGR. The other libraries are FDO, JTS, GEOS, JCS, MetaCRS, and GPSBabel. The application examples include derived GIS-software and data format conversions. Quantum GIS, its origin, and its applications are explained in detail in Sect. 30.3. The features include a rich GUI, attribute tables, vector symbols, labeling, editing functions, projections, georeferencing, GPS support, analysis, and Web Map Server functionality. Future developments will address mobile applications, 3-D, and multithreading. The origins of PostgreSQL are outlined and PostGIS is discussed in detail in Sect. 30.4. It extends PostgreSQL by implementing the Simple Feature standard. Section 30.5 details the most important open source licenses such as the GPL, the LGPL, the MIT License, and the BSD License, as well as the role of the Creative Commons.
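
    As a small taste of the libraries named above (assuming the GDAL/OGR Python bindings are installed; the file names are placeholders), reading a raster band and walking a vector layer takes only a few calls:

```python
# Sketch: minimal use of the GDAL and OGR Python bindings mentioned in the
# chapter. File names are placeholders; requires the GDAL package.
from osgeo import gdal, ogr

raster = gdal.Open("elevation.tif")
band = raster.GetRasterBand(1)
elevation = band.ReadAsArray()           # NumPy array of pixel values
print("raster size:", raster.RasterXSize, "x", raster.RasterYSize)

vector = ogr.Open("parcels.shp")
layer = vector.GetLayer()
for feature in layer:                    # OGR layers are iterable
    geom = feature.GetGeometryRef()
    print(feature.GetFID(), geom.GetGeometryName(), round(geom.GetArea(), 1))
```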

  11. Open source posturography.

    PubMed

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be a validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. Objective: to develop and validate a posturography software package and share its source code in open source terms. Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and a force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation index of the sway area, obtained from the center of pressure variations in both devices for the six conditions, was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman graphic concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
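
    One common definition of the sway-area variable used above is the area of the 95% confidence ellipse of the centre-of-pressure (COP) trace. The sketch below uses synthetic COP data and is only one of several sway-area definitions in use; it is not RombergLab code.

```python
# Sketch: sway area as the 95% confidence ellipse of the centre-of-pressure
# trace, area = pi * chi2_{0.95,2} * sqrt(det(cov)). Synthetic COP data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
cop_x = rng.normal(0.0, 0.4, 3000)   # medio-lateral COP (cm), hypothetical
cop_y = rng.normal(0.0, 0.9, 3000)   # antero-posterior COP (cm), hypothetical

cov = np.cov(cop_x, cop_y)
sway_area = np.pi * chi2.ppf(0.95, df=2) * np.sqrt(np.linalg.det(cov))
print(f"95% ellipse sway area: {sway_area:.2f} cm^2")
```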

  12. Multi-modality molecular imaging for gastric cancer research

    NASA Astrophysics Data System (ADS)

    Liang, Jimin; Chen, Xueli; Liu, Junting; Hu, Hao; Qu, Xiaochao; Wang, Fu; Nie, Yongzhan

    2011-12-01

    Because of their ability to integrate the strengths of different modalities and provide fully integrated information, multi-modality molecular imaging techniques provide an excellent solution for detecting and diagnosing cancer earlier, which remains difficult to achieve with existing techniques. In this paper, we present an overview of our research efforts on the development of the optical imaging-centric multi-modality molecular imaging platform, including the development of the imaging system, reconstruction algorithms and preclinical biomedical applications. Primary biomedical results show that the developed optical imaging-centric multi-modality molecular imaging platform has great potential for preclinical biomedical applications and future clinical translation.

  13. How Is Open Source Special?

    ERIC Educational Resources Information Center

    Kapor, Mitchell

    2005-01-01

    Open source software projects involve the production of goods, but in software projects, the "goods" consist of information. The open source model is an alternative to the conventional centralized, command-and-control way in which things are usually made. In contrast, open source projects are genuinely decentralized and transparent. Transparent…

  14. PR-PR: cross-platform laboratory automation system.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J

    2014-08-15

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  15. PR-PR: Cross-Platform Laboratory Automation System

    SciTech Connect

    Linshiz, G; Stawski, N; Goyal, G; Bi, CH; Poust, S; Sharma, M; Mutalik, V; Keasling, JD; Hillson, NJ

    2014-08-01

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  16. Multi-modality neuro-monitoring: conventional clinical trial design.

    PubMed

    Georgiadis, Alexandros L; Palesch, Yuko Y; Zygun, David; Hemphill, J Claude; Robertson, Claudia S; Leroux, Peter D; Suarez, Jose I

    2015-06-01

    Multi-modal monitoring has become an integral part of neurointensive care. However, our approach is at this time neither standardized nor backed by data from randomized controlled trials. The goal of the second Neurocritical Care Research Conference was to discuss research priorities in multi-modal monitoring, what research tools are available, as well as the latest advances in clinical trial design. This section of the meeting was focused on how such a trial should be designed so as to maximize yield and avoid mistakes of the past.

  17. Cross platform development using Delphi and Kylix

    SciTech Connect

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed for use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  18. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights into systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http
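
    The mosaic step described above is, at heart, index arithmetic: slice a huge image into fixed-size tiles whose origins advance by the tile size minus the overlap. The sketch below only computes tile rectangles and deliberately never loads a gigapixel image; the dimensions are arbitrary and this is not NDPITools code.

```python
# Sketch: compute tile rectangles (x, y, width, height) that cover a huge
# image with a fixed overlap, as done when exporting mosaics. No pixel data
# is read; dimensions are arbitrary examples.
def tile_rects(width, height, tile=2048, overlap=128):
    step = tile - overlap
    rects = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            rects.append((x, y, min(tile, width - x), min(tile, height - y)))
    return rects

rects = tile_rects(width=120_000, height=80_000)   # roughly gigapixel slide
print(len(rects), "tiles; first:", rects[0], "last:", rects[-1])
```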

  19. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights into systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here

  20. Multi-modal locomotion: from animal to application.

    PubMed

    Lock, R J; Burgess, S C; Vaidyanathan, R

    2014-03-01

    The majority of robotic vehicles that can be found today are bound to operations within a single media (i.e. land, air or water). This is very rarely the case when considering locomotive capabilities in natural systems. Utility for small robots often reflects the exact same problem domain as small animals, hence providing numerous avenues for biological inspiration. This paper begins to investigate the various modes of locomotion adopted by different genus groups in multiple media as an initial attempt to determine the compromise in ability adopted by the animals when achieving multi-modal locomotion. A review of current biologically inspired multi-modal robots is also presented. The primary aim of this research is to lay the foundation for a generation of vehicles capable of multi-modal locomotion, allowing ambulatory abilities in more than one media, surpassing current capabilities. By identifying and understanding when natural systems use specific locomotion mechanisms, when they opt for disparate mechanisms for each mode of locomotion rather than using a synergized singular mechanism, and how this affects their capability in each medium, similar combinations can be used as inspiration for future multi-modal biologically inspired robotic platforms.

  1. Utilizing Multi-Modal Literacies in Middle Grades Science

    ERIC Educational Resources Information Center

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  2. An Open Source Simulation System

    NASA Technical Reports Server (NTRS)

    Slack, Thomas

    2005-01-01

    An investigation into the current state of the art of open source real-time programming practices. This document includes what technologies are available, how easy it is to obtain, configure, and use them, and some performance measures made on the different systems. A matrix of vendors and their products is included as part of this investigation, but this is not an exhaustive list, and represents only a snapshot in time in a field that is changing rapidly. Specifically, three approaches are investigated: 1. Completely open source on generic hardware, downloaded from the net. 2. Open source packaged by a vendor and provided as a free evaluation copy. 3. Proprietary hardware with pre-loaded, source-available proprietary software provided by the vendor for our evaluation.

  3. A bioinspired multi-modal flying and walking robot.

    PubMed

    Daler, Ludovic; Mintchev, Stefano; Stefanini, Cesare; Floreano, Dario

    2015-01-19

    With the aim of extending the versatility and adaptability of robots in complex environments, a novel multi-modal flying and walking robot is presented. The robot consists of a flying wing with adaptive morphology that can perform both long-distance flight and walking in cluttered environments for local exploration. The robot's design is inspired by the common vampire bat Desmodus rotundus, which can perform aerial and terrestrial locomotion with limited trade-offs. The wings' adaptive morphology allows the robot to modify the shape of its body in order to increase its efficiency during terrestrial locomotion. Furthermore, aerial and terrestrial capabilities are powered by a single locomotor apparatus, thereby reducing the total complexity and weight of this multi-modal robot.

  4. Combining Multi-modal Features for Social Media Analysis

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Spiros; Giannakidou, Eirini; Kompatsiaris, Ioannis; Patras, Ioannis; Vakali, Athena

    In this chapter we discuss methods for efficiently modeling the diverse information carried by social media. The problem is viewed as a multi-modal analysis process where specialized techniques are used to overcome the obstacles arising from the heterogeneity of data. Focusing on the optimal combination of low-level features (i.e., early fusion), we present a bio-inspired algorithm for feature selection that weights the features based on their appropriateness to represent a resource. Under the same objective of optimal feature combination we also examine the use of pLSA-based aspect models, as the means to define a latent semantic space where heterogeneous types of information can be effectively combined. Tagged images taken from social sites have been used in the characteristic scenarios of image clustering and retrieval, to demonstrate the benefits of multi-modal analysis in social media.

  5. Multi-modal image registration using structural features.

    PubMed

    Kasiri, Keyvan; Clausi, David A; Fieguth, Paul

    2014-01-01

    Multi-modal image registration has been a challenging task for medical images because of the complex intensity relationship between the images to be aligned. Registration methods often rely on the statistical intensity relationship between the images, which suffers from problems such as statistical insufficiency. The proposed registration method is based on extracting structural features using complex phase and gradient-based information. By employing structural relationships between different modalities instead of complex similarity measures, the multi-modal registration problem is converted into a mono-modal one. Therefore, conventional mono-modal similarity measures can be utilized to evaluate the registration results. This new registration paradigm has been tested on magnetic resonance (MR) brain images of different modes. The method has been evaluated based on target registration error (TRE) to determine alignment accuracy. Quantitative results demonstrate that the proposed method is capable of achieving registration accuracy comparable to conventional mutual information.
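
    A drastically simplified version of the idea above (gradient information only, leaving out the paper's phase component; images are synthetic): map each modality to a normalised gradient-magnitude "structural" image, then score alignment with an ordinary mono-modal measure such as the sum of squared differences.

```python
# Sketch: reduce two modalities to gradient-magnitude "structural" images and
# compare them with a mono-modal measure (SSD). A simplification of the
# paper's phase/gradient features; synthetic images.
import numpy as np

def structural_map(img):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)      # normalise to [0, 1]

def ssd(a, b):
    return float(np.mean((a - b) ** 2))

xx, yy = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
t1_like = np.exp(-(xx**2 + yy**2) / 0.2)  # smooth synthetic "anatomy"
t2_like = 1.0 - t1_like                   # same structure, inverted contrast

print("SSD on intensities:", ssd(t1_like, t2_like))
print("SSD on structure  :", ssd(structural_map(t1_like), structural_map(t2_like)))
```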

  6. MINERVA - A Multi-Modal Radiation Treatment Planning System

    SciTech Connect

    D. E. Wessol; C. A. Wemple; D. W. Nigg; J. J. Cogliati; M. L. Milvich; C. Frederickson; M. Perkins; G. A. Harkin

    2004-10-01

    Recently, research efforts have begun to examine the combination of BNCT with external beam photon radiotherapy (Barth et al. 2004). In order to properly prepare treatment plans for patients being treated with combinations of radiation modalities, appropriate planning tools must be available. To facilitate this, researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and Montana State University (MSU) have undertaken development of a fully multi-modal radiation treatment planning system.

  7. Multi-modal exercise programs for older adults.

    PubMed

    Baker, Michael K; Atlantis, Evan; Fiatarone Singh, Maria A

    2007-07-01

    Various modalities of exercise have been demonstrated to improve physical function and quality of life in older adults. Current guidelines stress the importance of multi-modal exercise for this cohort, including strengthening exercises, cardiovascular, flexibility and balance training. There is a lack of evidence, however, that simultaneously prescribed doses and intensities of strength, aerobic, and balance training in older adults are both feasible and capable of eliciting changes in physical function and quality of life. A comprehensive, systematic database search for manuscripts was performed. Two reviewers independently assessed studies for potential inclusion. Physical and functional performance outcomes were extracted. The relative effect sizes (ES) were calculated with 95% confidence intervals. Fifteen studies were included, totalling 2,149 subjects; the mean cohort age ranged from 67 +/- 8 to 84 +/- 3 years. A low mean relative ES for strength was seen across the reviewed studies. Only six of the eleven studies that included balance measurements found a significant improvement in balance compared to controls. Aerobic fitness was seldom measured or reported. Five out of the six studies investigating fall rates showed a significant reduction. Functional and quality of life measures generally did not improve with exercise. Multi-modal exercise has a positive effect on falls prevention. The limited data available suggests that multi-modal exercise has a small effect on physical, functional and quality of life outcomes. Future research should include robustly designed trials that involve multi-modal exercise at individually prescribed intensities based on doses found to be effective in single-modality studies.

  8. THE OPEN SOURCING OF EPANET

    EPA Science Inventory

    A proposal was made at the 2009 EWRI Congress in Kansas City, MO to establish an Open Source Project (OSP) for the widely used EPANET pipe network analysis program. This would be an ongoing collaborative effort among a group of geographically dispersed advisors and developers, wo...

  10. OpenSesame: an open-source, graphical experiment builder for the social sciences.

    PubMed

    Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-06-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

  11. Multi-modality image registration using the decomposition model

    NASA Astrophysics Data System (ADS)

    Ibrahim, Mazlinda; Chen, Ke

    2017-04-01

    In medical image analysis, image registration is one of the crucial steps required to facilitate automatic segmentation, treatment planning and other applications involving imaging. Image registration, also known as image matching, aims to align two or more images so that the information obtained can be compared and combined. Different imaging modalities and their characteristics make the task more challenging. We propose a decomposition model combining parametric and non-parametric deformation for multi-modality image registration. Numerical results show that the normalised gradient field performs better than mutual information with the decomposition model.
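
    The normalised gradient field (NGF) measure referred to above has a compact form: two images are well aligned where their gradient directions agree. Below is one common formulation, sketched on synthetic images; epsilon controls the sensitivity to noise, and the images and sizes are arbitrary.

```python
# Sketch: normalised gradient field (NGF) distance between two images,
#   d = mean(1 - (n_R . n_T)^2),  with  n = grad(I) / sqrt(|grad(I)|^2 + eps^2).
# One common formulation, shown on synthetic images for illustration.
import numpy as np

def ngf(img, eps=1e-2):
    gy, gx = np.gradient(img.astype(float))
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / norm, gy / norm

def ngf_distance(ref, tmpl, eps=1e-2):
    rx, ry = ngf(ref, eps)
    tx, ty = ngf(tmpl, eps)
    return float(np.mean(1.0 - (rx * tx + ry * ty) ** 2))

xx, yy = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
reference = np.exp(-(xx**2 + yy**2) / 0.2)
template = 1.0 - reference               # different contrast, same edges
print("NGF distance (aligned):", ngf_distance(reference, template))
print("NGF distance (shifted):", ngf_distance(reference, np.roll(template, 12, axis=1)))
```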

  12. A software framework for real-time multi-modal detection of microsleeps.

    PubMed

    Knopp, Simon J; Bones, Philip J; Weddell, Stephen J; Jones, Richard D

    2017-06-01

    A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
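
    The pipeline-with-replaceable-modules structure described above can be pictured with a tiny sketch that mirrors the architecture rather than the framework's actual API (class and field names are invented): every stage implements the same process interface and stages are chained in order.

```python
# Sketch of a pipeline with user-replaceable signal-processing stages,
# mirroring the architecture described above (illustrative only; not the
# framework's real API).
from abc import ABC, abstractmethod

class Stage(ABC):
    @abstractmethod
    def process(self, sample: dict) -> dict:
        """Consume one multi-modal sample and return an (augmented) sample."""

class BandPower(Stage):
    def process(self, sample):
        eeg = sample["eeg"]
        sample["alpha_power"] = sum(x * x for x in eeg) / len(eeg)  # toy feature
        return sample

class ThresholdClassifier(Stage):
    def __init__(self, threshold):
        self.threshold = threshold
    def process(self, sample):
        sample["microsleep"] = sample["alpha_power"] > self.threshold
        return sample

def run_pipeline(stages, sample):
    for stage in stages:                 # stages are swappable plugins
        sample = stage.process(sample)
    return sample

out = run_pipeline([BandPower(), ThresholdClassifier(threshold=0.5)],
                   {"eeg": [0.1, 0.4, -0.3, 0.9], "eye_closed": False})
print(out["microsleep"])
```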

  13. AKM in Open Source Communities

    NASA Astrophysics Data System (ADS)

    Stamelos, Ioannis; Kakarontzas, George

    Previous chapters in this book have dealt with Architecture Knowledge Management in traditional Closed Source Software (CSS) projects. This chapter will attempt to examine the ways that knowledge is shared among participants in Free Libre Open Source Software (FLOSS) projects and how architectural knowledge is managed compared with CSS. FLOSS projects are organized and developed in a fundamentally different way than CSS projects. FLOSS projects simply do not develop code the way CSS projects do. As a consequence, their knowledge management mechanisms are also based on different concepts and tools.

  14. A multi-modal parcellation of human cerebral cortex.

    PubMed

    Glasser, Matthew F; Coalson, Timothy S; Robinson, Emma C; Hacker, Carl D; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-08-11

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.
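
    The final step described above, recognising each area's multi-modal 'fingerprint' with a trained classifier, can be caricatured in a few scikit-learn lines. The features below are entirely synthetic and the model is far simpler than the HCP classifier; it only illustrates the fingerprint-matching idea.

```python
# Sketch: train a classifier to recognise cortical areas from multi-modal
# feature vectors ("fingerprints"). Entirely synthetic data; a caricature of
# the approach, not the HCP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n_areas, samples_per_area, n_features = 10, 200, 12   # toy sizes

centers = rng.normal(size=(n_areas, n_features))      # one "fingerprint" per area
X = np.vstack([c + 0.3 * rng.normal(size=(samples_per_area, n_features))
               for c in centers])
y = np.repeat(np.arange(n_areas), samples_per_area)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)

new_subject = centers + 0.3 * rng.normal(size=centers.shape)
print("fraction of areas detected:", (clf.predict(new_subject) == np.arange(n_areas)).mean())
```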

  15. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that use only image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of pulmonary fibrosis.
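
    The following sketch illustrates the central idea of the paper, augmenting image-derived features with multi-modal biomarkers before classification, on synthetic data with scikit-learn; the features, labels, and model are placeholders, not the authors' CAD pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_subjects = 200
      image_features = rng.normal(size=(n_subjects, 20))   # e.g. texture/intensity statistics (toy)
      biomarkers = rng.normal(size=(n_subjects, 4))         # e.g. ESR, fibrinogen, QRS, I:40 (toy)
      severity = (image_features[:, 0] + biomarkers[:, 0] > 0).astype(int)  # synthetic labels

      image_only = cross_val_score(RandomForestClassifier(random_state=0),
                                   image_features, severity, cv=5).mean()
      combined = cross_val_score(RandomForestClassifier(random_state=0),
                                 np.hstack([image_features, biomarkers]), severity, cv=5).mean()
      print(f"image-only accuracy: {image_only:.3f}, image+biomarker accuracy: {combined:.3f}")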

  16. A multi-modal parcellation of human cerebral cortex

    PubMed Central

    Glasser, Matthew F; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-01-01

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease. PMID:27437579

  17. Sorted self-similarity for multi-modal image registration.

    PubMed

    Kasiri, Keyvan; Fieguth, Paul; Clausi, David A

    2016-08-01

    In medical image analysis, registration of multimodal images has been challenging due to the complex intensity relationship between images. Classical multi-modal registration approaches evaluate the degree of alignment by measuring the statistical dependency of the intensity values between the images to be aligned. Employing statistical similarity measures, such as mutual information, is not promising in cases with complex and spatially dependent intensity relations. A new similarity measure is proposed that assesses the similarity of pixels within an image, based on the idea that similar structures in an image are more likely to undergo similar intensity transformations. The most significant pixel similarity values are considered to transmit the most significant self-similarity information. The proposed method is employed in a framework to register different modalities of real brain scans, and its performance is compared to the conventional multi-modal registration approach. Quantitative evaluation demonstrates better registration accuracy for both rigid and non-rigid deformations.
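
    A toy illustration of within-image self-similarity follows: for each pixel, squared differences to its neighbours are collected and sorted, so descriptors from two modalities can be compared directly. This is a simplified stand-in for the authors' patch-based formulation, not their exact measure.

      import numpy as np

      def sorted_self_similarity(img, search=5):
          """Per-pixel sorted neighbourhood differences (a crude self-similarity descriptor)."""
          r = search // 2
          padded = np.pad(img, r, mode="reflect")
          h, w = img.shape
          offsets = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                     if (dy, dx) != (0, 0)]
          desc = np.zeros((h, w, len(offsets)))
          for i, (dy, dx) in enumerate(offsets):
              shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
              desc[:, :, i] = (shifted - img) ** 2   # pixel differences stand in for patch distances
          return np.sort(desc, axis=2)               # sorting keeps only the similarity distribution

      def descriptor_distance(img_a, img_b):
          return np.mean((sorted_self_similarity(img_a) - sorted_self_similarity(img_b)) ** 2)

      rng = np.random.default_rng(0)
      a = rng.random((32, 32))
      # An intensity-inverted copy gives zero descriptor distance, illustrating why
      # self-similarity is insensitive to the intensity mapping between modalities.
      print(descriptor_distance(a, 1.0 - a))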

  18. Feature-based Alignment of Volumetric Multi-modal Images

    PubMed Central

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  19. MINERVA-a multi-modal radiation treatment planning system.

    PubMed

    Wemple, C A; Wessol, D E; Nigg, D W; Cogliati, J J; Milvich, M L; Frederickson, C; Perkins, M; Harkin, G J

    2004-11-01

    Researchers at the Idaho National Engineering and Environmental Laboratory and Montana State University have undertaken development of MINERVA, a patient-centric, multi-modal radiation treatment planning system. This system can be used for planning and analyzing several radiotherapy modalities, either singly or combined, using common, modality-independent image and geometry construction, dose reporting, and guiding. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The MINERVA design also facilitates the future integration of improved planning technologies. The code is being developed for the Java Virtual Machine to ensure interoperability. A full computation path has been established for molecular targeted radiotherapy treatment planning, with the associated transport plugin developed by researchers at the Lawrence Livermore National Laboratory. Development of the neutron transport plugin module is proceeding rapidly, with completion expected later this year. Future development efforts will include deformable registration methods, improved segmentation methods for patient model definition, and three-dimensional visualization of the patient images, geometry, and dose data. Transport and source plugins will be created for additional treatment modalities, including brachytherapy, external beam proton radiotherapy, and the EGSnrc/BEAMnrc codes for external beam photon and electron radiotherapy.

  20. Open Source: Everyone Becomes a Printer.

    ERIC Educational Resources Information Center

    Bruce, Bertram

    2000-01-01

    Discusses "open source": a method of distributing software in which programmers make available to all the actual text of their programs. Notes that this makes possible "open-source" writing in the same way that the printing press made possible "open-source" reading, enabling mass literacy. Examines implications of…

  2. The Connectome Viewer Toolkit: An Open Source Framework to Manage, Analyze, and Visualize Connectomes

    PubMed Central

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit – a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/ PMID:21713110

  3. The origin of human multi-modal communication.

    PubMed

    Levinson, Stephen C; Holler, Judith

    2014-09-19

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system.

  4. Multi-modality image registration for effective thermographic fever screening

    NASA Astrophysics Data System (ADS)

    Dwith, C. Y. N.; Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng

    2017-02-01

    Fever screening based on infrared thermographs (IRTs) is a viable mass screening approach during infectious disease pandemics, such as Ebola and Severe Acute Respiratory Syndrome (SARS), for temperature monitoring in public places like hospitals and airports. IRTs have been found to be powerful, quick and non-invasive methods for detecting elevated temperatures. Moreover, regions medially adjacent to the inner canthi (called the canthi regions in this paper) are preferred sites for fever screening. Accurate localization of the canthi regions can be achieved through multi-modality registration of infrared (IR) and white-light images. Here we propose a registration method based on a coarse-to-fine strategy that uses different registration models driven by landmarks and edge detection on eye contours. We have evaluated the registration accuracy to be within +/- 2.7 mm, which enables accurate localization of the canthi regions.
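
    The coarse, landmark-driven step can be illustrated with a least-squares affine fit between matched landmark sets, as sketched below; the landmark coordinates are invented and the edge-based fine registration stage is not shown.

      import numpy as np

      def fit_affine(src, dst):
          """Least-squares 2D affine transform mapping src landmarks onto dst."""
          src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
          params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
          return params                                       # 3x2 matrix [A | t]^T

      # Hypothetical matched landmarks (e.g. eye corners) in IR and visible images.
      ir_landmarks = np.array([[40.0, 52.0], [72.0, 50.0], [55.0, 80.0], [30.0, 90.0]])
      visible_landmarks = np.array([[120.0, 160.0], [215.0, 155.0], [165.0, 245.0], [90.0, 272.0]])

      affine = fit_affine(ir_landmarks, visible_landmarks)
      mapped = np.hstack([ir_landmarks, np.ones((4, 1))]) @ affine
      print("worst landmark residual (pixels):", np.abs(mapped - visible_landmarks).max())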

  5. Multi-modal cockpit interface for improved airport surface operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.

  6. Plasmonic Gold Nanostars for Multi-Modality Sensing and Diagnostics

    PubMed Central

    Liu, Yang; Yuan, Hsiangkuo; Kersey, Farrell R.; Register, Janna K.; Parrott, Matthew C.; Vo-Dinh, Tuan

    2015-01-01

    Gold nanostars (AuNSs) are unique systems that can provide a novel multifunctional nanoplatform for molecular sensing and diagnostics. The plasmonic absorption band of AuNSs can be tuned to the near infrared spectral range, often referred to as the “tissue optical window”, where light exhibits minimal absorption and deep penetration in tissue. AuNSs have been applied for detecting disease biomarkers and for biomedical imaging using multi-modality methods including surface-enhanced Raman scattering (SERS), two-photon photoluminescence (TPL), magnetic resonance imaging (MRI), positron emission tomography (PET), and X-ray computer tomography (CT) imaging. In this paper, we provide an overview of the recent development of plasmonic AuNSs in our laboratory for biomedical applications and highlight their potential for future translational medicine as a multifunctional nanoplatform. PMID:25664431

  7. Multi-modal myocontrol: Testing combined force- and electromyography.

    PubMed

    Nowak, Markus; Eiband, Thomas; Castellini, Claudio

    2017-07-01

    Myocontrol, that is, control of prostheses using bodily signals, has proved over the decades to be a surprisingly hard problem for the scientific community of assistive and rehabilitation robotics. In particular, traditional surface electromyography (sEMG) seems to be no longer enough to guarantee dexterity (i.e., control over several degrees of freedom) and, most importantly, reliability. Multi-modal myocontrol is concerned with the idea of using novel signal gathering techniques as a replacement of, or alongside, sEMG, to provide high-density and diverse signals to improve dexterity and make the control more reliable. In this paper we present an offline and online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. A total of twenty sEMG and FMG sensors were used simultaneously, in several combined configurations, to predict opening/closing of the hand and activation of two degrees of freedom of the wrist of ten intact subjects. The analysis was targeted at determining the optimal sensor combination and control parameters; the experimental results indicate that sEMG sensors alone perform worst, yielding an nRMSE of 9.1%, while mixing FMG and sEMG or using FMG only reduces the nRMSE to 5.2-6.6%. To validate these results, we engaged the subject with median performance in an online goal-reaching task. Analysis of this further experiment reveals that the online behaviour is similar to the offline one.
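
    The offline comparison described above can be mimicked on synthetic data: regress a wrist/hand activation target from sEMG-only, FMG-only, and combined channel sets and report nRMSE for each. The sketch below uses a simple ridge regressor and random data, not the study's recordings or model.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n_samples = 2000
      semg = rng.normal(size=(n_samples, 10))   # 10 sEMG channels (toy)
      fmg = rng.normal(size=(n_samples, 10))    # 10 FMG channels (toy)
      target = 0.6 * fmg[:, 0] + 0.4 * semg[:, 1] + 0.1 * rng.normal(size=n_samples)

      def nrmse(features):
          X_tr, X_te, y_tr, y_te = train_test_split(features, target, random_state=0)
          pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
          return np.sqrt(np.mean((pred - y_te) ** 2)) / (y_te.max() - y_te.min())

      for name, feats in [("sEMG only", semg), ("FMG only", fmg),
                          ("sEMG + FMG", np.hstack([semg, fmg]))]:
          print(f"{name}: nRMSE = {nrmse(feats):.3f}")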

  8. Wearable Brain Imaging with Multi-Modal Physiological Recording.

    PubMed

    Strangman, Gary E; Ivkovic, Vladimir; Zhang, Quan

    2017-07-13

    The brain is a central component of cognitive and physical human performance. Measures including functional brain activation, cerebral perfusion, cerebral oxygenation, evoked electrical responses, and resting hemodynamic and electrical activity are all related to, or can predict, health status or performance decrements. However, measuring brain physiology typically requires large, stationary machines that are not suitable for mobile or self-monitoring. Moreover, when individuals are ambulatory, systemic physiological fluctuations (e.g., in heart rate, blood pressure, skin perfusion and more) can interfere with non-invasive brain measurements. In efforts to address the physiological monitoring and performance assessment needs for astronauts during spaceflight, we have developed easy-to-use, wearable prototypes, NINscan (for near-infrared scanning), that can collect synchronized multi-modal physiology data, including hemodynamic deep-tissue imaging (including brain and muscles), electroencephalography, electrocardiography, electromyography, electrooculography, accelerometry, gyroscopy, pressure, respiration and temperature measurements. Given their self-contained and portable nature, these devices can be deployed in a much broader range of settings, including austere environments, thereby enabling a wider range of novel medical and research physiology applications. We review these, including high-altitude assessments, self-deployable multi-modal (e.g., polysomnographic) recordings in remote or low-resource environments, fluid shifts in variable-gravity or spaceflight analog environments, intra-cranial brain motion during high-impact sports, and long-duration monitoring for clinical symptom-capture in various clinical conditions. In addition to further enhancing sensitivity and miniaturization, advanced computational algorithms could help support real-time feedback and alerts regarding performance and health. Copyright © 2017, Journal of Applied Physiology.

  9. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  10. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.

    2013-12-01

    The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: the S-HYPE model for Sweden, the BALT-HYPE model for the Baltic Sea, and the E-HYPE model for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large scale modeling in different regions of the world. An important goal of our work is to make our data and tools available as open data and services. To this end we created the HYPE Open Source Community (OSC), which makes the source code of HYPE available to anyone interested in further development of HYPE. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from.

  11. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Arheimer, Berit; Pers, Charlotta; Isberg, Kristina

    2013-04-01

    The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model (Lindström et al., 2010). It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. In Sweden, the model is used by water authorities to fulfil the Water Framework Directive and the Marine Strategy Framework Directive. It is used for characterization, forecasts, and scenario analyses. Model data can be downloaded for free from three different HYPE applications: Europe (www.smhi.se/e-hype), the Baltic Sea basin (www.smhi.se/balt-hype), and Sweden (vattenweb.smhi.se). The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modelling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code will be delivered frequently. The main objective of the HYPE OSC is to provide public access to a state-of-the-art operational hydrological model and to encourage hydrologic expertise from different parts of the world to contribute to model improvement. HYPE OSC is open to everyone interested in hydrology, hydrological modelling and code development - e.g. scientists, authorities, and consultancies. The HYPE Open Source Community was initiated in November 2011 with a kick-off workshop attended by 50 eager participants.

  12. Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging

    DTIC Science & Technology

    2008-09-18

    Report documentation fragments: title and subtitle, "Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging"; contract number FA48690714016; key researcher Jinwoo Cheon, Department of Chemistry, Yonsei University, 134 Shinchon...; report date field 01-02-2008.

  13. Free for All: Open Source Software

    ERIC Educational Resources Information Center

    Schneider, Karen

    2008-01-01

    Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…

  14. Open-source software: not quite endsville.

    PubMed

    Stahl, Matthew T

    2005-02-01

    Open-source software will never achieve ubiquity. There are environments in which it simply does not flourish. By its nature, open-source development requires free exchange of ideas, community involvement, and the efforts of talented and dedicated individuals. However, pressures can come from several sources that prevent this from happening. In addition, openness and complex licensing issues invite misuse and abuse. Care must be taken to avoid the pitfalls of open-source software.

  16. Open-source hardware for medical devices.

    PubMed

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 of open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  17. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 of open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  18. Ex-vivo multi-modal microscopy of healthy skin

    NASA Astrophysics Data System (ADS)

    Guevara, Edgar; Gutiérrez-Hernández, José Manuel; Castonguay, Alexandre; Lesage, Frédéric; González, Francisco Javier

    2014-09-01

    The thorough characterization of skin samples is a critical step in investigating dermatological diseases. The combination of depth-sensitive anatomical imaging with molecular imaging has the potential to provide vast information about the skin. In this proof-of-concept work we present high-resolution mosaic images of skin biopsies acquired with Optical Coherence Tomography (OCT) and manually co-registered with standard microscopy, two-dimensional Raman spectral mapping and fluorescence imaging. A human breast skin sample, embedded in paraffin, was imaged with a swept-source OCT system at 1310 nm. Individual OCT volumes were acquired in a fully automated fashion in order to obtain a large field of view at high resolution (~10 μm). Based on anatomical features, the other three modalities were manually co-registered to the projected OCT volume using an affine transformation. A drawback is the manual co-registration, which may limit the utility of this method. However, the results indicate that the multiple imaging modalities provide complementary information about the sample. This pilot study suggests that multi-modal microscopy may be a valuable tool in the characterization of skin biopsies.

  19. The origin of human multi-modal communication

    PubMed Central

    Levinson, Stephen C.; Holler, Judith

    2014-01-01

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system. PMID:25092670

  20. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    PubMed

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to the variations of MR/CT appearance and of the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities without supervision and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrast, resolution and protocol, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experiment results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
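
    TDCN itself is not reproduced here; the sketch below shows only the generic idea of fusing MR- and CT-derived feature vectors before a classification head, using a deliberately simple PyTorch module with made-up dimensions.

      # A simplified sketch (not TDCN): fuse MR and CT feature vectors by
      # concatenation, then classify with a small head.
      import torch
      import torch.nn as nn

      class SimpleFusionClassifier(nn.Module):
          def __init__(self, mr_dim=128, ct_dim=128, n_vertebrae=24):
              super().__init__()
              self.fuse = nn.Sequential(nn.Linear(mr_dim + ct_dim, 64), nn.ReLU())
              self.head = nn.Linear(64, n_vertebrae)

          def forward(self, mr_feat, ct_feat):
              fused = torch.cat([mr_feat, ct_feat], dim=1)   # naive fusion by concatenation
              return self.head(self.fuse(fused))

      model = SimpleFusionClassifier()
      logits = model(torch.randn(8, 128), torch.randn(8, 128))
      print(logits.shape)   # torch.Size([8, 24]): one score per candidate vertebra label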

  1. 7 Questions to Ask Open Source Vendors

    ERIC Educational Resources Information Center

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  2. Open Source, Openness, and Higher Education

    ERIC Educational Resources Information Center

    Wiley, David

    2006-01-01

    In this article David Wiley provides an overview of how the general expansion of open source software has affected the world of education in particular. In doing so, Wiley not only addresses the development of open source software applications for teachers and administrators, he also discusses how the fundamental philosophy of the open source…

  4. Open Source 2010: Reflections on 2007

    ERIC Educational Resources Information Center

    Wheeler, Brad

    2007-01-01

    Colleges and universities and commercial firms have demonstrated great progress in realizing the vision proffered for "Open Source 2007," and 2010 will mark even greater progress. Although much work remains in refining open source for higher education applications, the signals are now clear: the collaborative development of software can provide…

  6. Deformable registration of multi-modal data including rigid structures

    SciTech Connect

    Huesman, Ronald H.; Klein, Gregory J.; Kimdon, Joey A.; Kuo, Chaincy; Majumdar, Sharmila

    2003-05-02

    Multi-modality imaging studies are becoming more widely utilized in the analysis of medical data. Anatomical data from CT and MRI are useful for analyzing or further processing functional data from techniques such as PET and SPECT. When data are not acquired simultaneously, even when these data are acquired on a dual-imaging device using the same bed, motion can occur that requires registration between the reconstructed image volumes. As the human torso can allow non-rigid motion, this type of motion should be estimated and corrected. We report a deformable registration technique that utilizes rigid registration for bony structures, while allowing elastic transformation of soft tissue to more accurately register the entire image volume. The technique is applied to the registration of CT and MR images of the lumbar spine. First a global rigid registration is performed to approximately align features. Bony structures are then segmented from the CT data using a semi-automated process, and bounding boxes for each vertebra are established. Each CT subvolume is then individually registered to the MRI data using a piece-wise rigid registration algorithm and a mutual information image similarity measure. The resulting set of rigid transformations allows for accurate registration of the parts of the CT and MRI data representing the vertebrae, but not the adjacent soft tissue. To align the soft tissue, a smoothly-varying deformation is computed using a thin plate spline (TPS) algorithm. The TPS technique requires a sparse set of landmarks that are to be brought into correspondence. These landmarks are automatically obtained from the segmented data using simple edge-detection techniques and random sampling from the edge candidates. A smoothness parameter is also included in the TPS formulation for characterization of the stiffness of the soft tissue. Estimation of an appropriate stiffness factor is obtained iteratively by using the mutual information cost function on the result
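
    The thin-plate-spline step can be illustrated with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel and a smoothing term analogous to the stiffness parameter mentioned above. The landmark correspondences below are synthetic, and this stands in for, rather than reproduces, the authors' implementation.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(3)
      src = rng.uniform(0, 100, size=(30, 3))             # landmark positions in CT space (toy)
      dst = src + rng.normal(scale=1.5, size=src.shape)   # corresponding positions in MR space (toy)

      # Smoothing plays the role of a stiffness factor: larger values give stiffer deformations.
      tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1.0)

      query = rng.uniform(0, 100, size=(5, 3))            # arbitrary soft-tissue points
      print(tps(query) - query)                           # displacement applied to each point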

  7. Open3DALIGN: an open-source software aimed at unsupervised ligand alignment.

    PubMed

    Tosco, Paolo; Balle, Thomas; Shiri, Fereshteh

    2011-08-01

    An open-source, cross-platform software package aimed at conformer generation and unsupervised rigid-body molecular alignment is presented. Different algorithms have been implemented to perform single- and multi-conformation superimpositions on one or more templates. Alignments can be accomplished by matching pharmacophores, heavy atoms or a combination of the two. All methods have been successfully validated on eight comprehensive datasets previously gathered by Sutherland and co-workers. High computational performance has been attained through efficient parallelization of the code. The unsupervised nature of the alignment algorithms, together with the program's scriptable interface, makes Open3DALIGN an ideal component of high-throughput, automated cheminformatics workflows.

  8. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com.

  9. A new region descriptor for multi-modal medical image registration and region detection.

    PubMed

    Xiaonan Wan; Dongdong Yu; Feng Yang; Caiyun Yang; Chengcai Leng; Min Xu; Jie Tian

    2015-08-01

    Establishing accurate anatomical correspondences plays a critical role in multi-modal medical image registration and region detection. Although many feature-based registration methods have been proposed to detect these correspondences, they are mostly based on point descriptors, which lead to high memory cost and cannot represent local region information. In this paper, we propose a new region descriptor that depicts the features in each region, instead of in each point, as a vector. First, feature attributes of each point are extracted by a Gabor filter bank combined with a gradient filter. Then, the region descriptor is defined as the covariance of the feature attributes of the points inside the region, based on which a cost function is constructed for multi-modal image registration. Finally, our proposed region descriptor is applied to both multi-modal region detection and similarity metric measurement in multi-modal image registration. Experiments demonstrate the feasibility and effectiveness of our proposed region descriptor.
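
    A small sketch of a region covariance descriptor follows: per-pixel attributes from a Gabor filter bank and an image gradient are stacked and summarized by their covariance over a region mask. Filter settings and the example region are arbitrary and not the paper's exact configuration.

      import numpy as np
      from skimage import data
      from skimage.filters import gabor

      image = data.camera().astype(float) / 255.0
      gy, gx = np.gradient(image)                 # gradient attributes
      attributes = [gx, gy]
      for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
          real, _ = gabor(image, frequency=0.2, theta=theta)   # small Gabor bank (arbitrary settings)
          attributes.append(real)

      def region_covariance(mask):
          """Covariance of the per-pixel attribute vectors inside a boolean region mask."""
          stacked = np.stack([a[mask] for a in attributes], axis=0)   # (n_attributes, n_pixels)
          return np.cov(stacked)

      mask = np.zeros_like(image, dtype=bool)
      mask[100:150, 200:260] = True            # an arbitrary rectangular region
      print(region_covariance(mask).shape)     # (6, 6) descriptor, independent of region size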

  10. Pre-Motor Response Time Benefits in Multi-Modal Displays

    DTIC Science & Technology

    2013-11-12

    Report documentation fragments (cleaned): "The present series of experiments tests the assimilation and efficacy of purpose-created tactile ... equivalent visual representations of these same messages. Results indicated that there was a performance benefit for concurrent message presentations..." Distribution statement: approved for public release; distribution is unlimited.

  11. Strategic Mobility 21, Inland Port - Multi-Modal Terminal Operating System Design Specification

    DTIC Science & Technology

    2007-09-25

    Report fragments: glossary excerpt, "...Commission. A federal regulatory agency that governed over the rules and regulations of the railroading industry. The ICC Termination Act of 1995 ended..."; title page, "Strategic Mobility 21, Inland Port - Multi-Modal Terminal Operating System Design Specification, Contractor Report 0008, Prepared for..."; running header, "Multi-Modal Terminal Operating Software System".

  12. IGSTK: Framework and example application using an open source toolkit for image-guided surgery applications

    NASA Astrophysics Data System (ADS)

    Cheng, Peng; Zhang, Hui; Kim, Hee-su; Gary, Kevin; Blake, M. Brian; Gobbi, David; Aylward, Stephen; Jomier, Julien; Enquobahrie, Andinet; Avila, Rick; Ibanez, Luis; Cleary, Kevin

    2006-03-01

    Open source software has tremendous potential for improving the productivity of research labs and enabling the development of new medical applications. The Image-Guided Surgery Toolkit (IGSTK) is an open source software toolkit based on ITK, VTK, and FLTK, and uses the cross-platform tools CMAKE and DART to support common operating systems such as Linux, Windows, and MacOS. IGSTK integrates the basic components needed in surgical guidance applications and provides a common platform for fast prototyping and development of robust image-guided applications. This paper gives an overview of the IGSTK framework and current status of development followed by an example needle biopsy application to demonstrate how to develop an image-guided application using this toolkit.

  13. Multi-modal automatic montaging of adaptive optics retinal images.

    PubMed

    Chen, Min; Cooper, Robert F; Han, Grace K; Gee, James; Brainard, David H; Morgan, Jessica I W

    2016-12-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download.
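
    The same general recipe (SIFT keypoints, descriptor matching, RANSAC outlier rejection) can be sketched with OpenCV as below; the published algorithm's multi-modal handling and discontinuity detection are not shown, and the image paths are placeholders.

      import cv2
      import numpy as np

      img_a = cv2.imread("tile_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical AO image tiles
      img_b = cv2.imread("tile_b.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp_a, des_a = sift.detectAndCompute(img_a, None)
      kp_b, des_b = sift.detectAndCompute(img_b, None)

      # Match descriptors and keep good matches via Lowe's ratio test.
      matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

      # RANSAC rejects mismatches while estimating the tile-to-tile transform.
      H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
      print("inlier matches:", int(inliers.sum()))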

  14. Multi-modal automatic montaging of adaptive optics retinal images

    PubMed Central

    Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.

    2016-01-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714

  15. Multi-modality sparse representation-based classification for Alzheimer's disease and mild cognitive impairment.

    PubMed

    Xu, Lele; Wu, Xia; Chen, Kewei; Yao, Li

    2015-11-01

    The discrimination of Alzheimer's disease (AD) and its prodromal stage, known as mild cognitive impairment (MCI), from normal control (NC) is important for patients' timely treatment. The simultaneous use of multi-modality data has been demonstrated to be helpful for more accurate identification. The current study focused on extending a multi-modality algorithm and evaluating the method by identifying AD/MCI. In this study, sparse representation-based classification (SRC), a well-developed method in pattern recognition and machine learning, was extended to a multi-modality classification framework named weighted multi-modality SRC (wmSRC). Data including three modalities, volumetric magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG) positron emission tomography (PET) and florbetapir PET, from the Alzheimer's Disease Neuroimaging Initiative database were adopted for AD/MCI classification (113 AD patients, 110 MCI patients and 117 NC subjects). Adopting wmSRC, the classification accuracy reached 94.8% for AD vs. NC, 74.5% for MCI vs. NC, and 77.8% for progressive MCI vs. stable MCI, superior to or comparable with the results of other state-of-the-art models in recent multi-modality research. The wmSRC method is a promising tool for classification with multi-modality data. It could be effective for identifying diseases from NC with neuroimaging data, which could be helpful for the timely diagnosis and treatment of diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
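
    A minimal single-modality sketch of sparse representation-based classification follows: a test sample is coded sparsely over the training samples and assigned to the class with the smallest reconstruction residual. The weighted multi-modality extension would repeat this per modality and combine residuals with modality weights; the data and solver below are illustrative only.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      n_per_class, n_features, classes = 30, 50, [0, 1, 2]
      means = rng.normal(size=(len(classes), n_features))
      X_train = np.vstack([means[c] + 0.3 * rng.normal(size=(n_per_class, n_features))
                           for c in classes])
      y_train = np.repeat(classes, n_per_class)
      test = means[1] + 0.3 * rng.normal(size=n_features)   # a sample drawn from class 1

      # Sparse code of the test sample over the dictionary of training samples.
      code = Lasso(alpha=0.05, max_iter=5000, fit_intercept=False).fit(X_train.T, test).coef_

      # Class-wise reconstruction residuals; predict the class with the smallest residual.
      residuals = {c: np.linalg.norm(test - X_train[y_train == c].T @ code[y_train == c])
                   for c in classes}
      print("predicted class:", min(residuals, key=residuals.get))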

  16. The Efficient Utilization of Open Source Information

    SciTech Connect

    Baty, Samuel R.

    2016-08-11

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned: Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.); direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  17. Cross-Platform Development Techniques for Mobile Devices

    DTIC Science & Technology

    2013-09-01

    Report fragments: table-of-contents entries (Future Work; Security; HTML5); abbreviation list (Department of Defense; GUI, Graphical User Interface; HTML5, HyperText Markup Language version 5; IDE, Integrated Development Environment; NPS, Naval...); excerpt: "...language (such as JavaScript, Lua, or HTML5), the integrated development environment (IDE), the emulator, and the debugger. The cross-platform tool..."

  18. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.

  19. The 2016 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083

  20. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  1. Freeing Worldview's development process: Open source everything!

    NASA Astrophysics Data System (ADS)

    Gunnoe, T.

    2016-12-01

    Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.

  2. The 2016 Bioinformatics Open Source Conference (BOSC).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  3. A clinic compatible, open source electrophysiology system.

    PubMed

    Hermiz, John; Rogers, Nick; Kaestner, Erik; Ganji, Mehran; Cleary, Dan; Snider, Joseph; Barba, David; Dayeh, Shadi; Halgren, Eric; Gilja, Vikash

    2016-08-01

    Open source electrophysiology (ephys) recording systems have several advantages over commercial systems, such as customization and affordability, enabling more researchers to conduct ephys experiments. Notable open source ephys systems include Open-Ephys, NeuroRighter and, more recently, Willow, all of which have high channel counts (64+), scalability, and advanced software to develop on top of. However, little work has been done to build an open source ephys system that is clinic compatible, particularly in the operating room where acute human electrocorticography (ECoG) research is performed. We developed an affordable (< $10,000) and open system for research purposes that features power isolation for patient safety, compact and water-resistant enclosures, and 256 recording channels sampled at up to 20 ksamples/s with 16-bit resolution. The system was validated by recording ECoG with a high-density, thin-film device during an acute, awake craniotomy study in the UC San Diego Thornton Hospital operating room.

  4. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  5. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for expensive high-end hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software package designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  6. Low-Rank and Joint Sparse Representations for Multi-Modal Recognition.

    PubMed

    Zhang, Heng; Patel, Vishal M; Chellappa, Rama

    2017-10-01

    We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
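
    The full solver is not reproduced here; the sketch below shows only the two proximal operators that low-rank plus joint-sparse formulations of this kind are typically built from (singular-value soft-thresholding and row-wise shrinkage), applied to a random matrix for illustration.

      import numpy as np

      def prox_nuclear(X, tau):
          """Singular-value soft-thresholding: the proximal operator of the nuclear norm."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def prox_row_l21(X, tau):
          """Row-wise shrinkage: the proximal operator of the l2,1 (joint-sparsity) norm."""
          norms = np.linalg.norm(X, axis=1, keepdims=True)
          scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
          return X * scale

      C = np.random.default_rng(5).normal(size=(20, 8))
      print("rank after nuclear prox:", np.linalg.matrix_rank(prox_nuclear(C, 2.0)))
      print("nonzero rows after l21 prox:",
            int((np.abs(prox_row_l21(C, 2.0)).sum(axis=1) > 0).sum()))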

  7. Predicting the multi-modal binding propensity of small molecules: towards an understanding of drug promiscuity.

    PubMed

    Park, Keunwan; Lee, Soyoung; Ahn, Hee-Sung; Kim, Dongsup

    2009-08-01

    Drug promiscuity is one of the key issues in current drug development. Many famous drugs have turned out to behave unexpectedly due to their propensity to bind to multiple targets. One of the primary reasons for this promiscuity is that drugs bind to multiple distinctive target environments, a feature that we call multi-modal binding. Accordingly, investigations into whether multi-modal binding propensities can be predicted, and if so, whether the features determining this behavior can be found, would be an important advance. In this study, we have developed a structure-based classifier that predicts whether small molecules will bind to multiple distinct binding sites. The binding sites for all ligands in the Protein Data Bank (PDB) were clustered by binding site similarity, and the ligands that bind to many dissimilar binding sites were identified as multi-modal binding ligands. The mono-binding ligands were also collected, and the classifiers were built using various machine-learning algorithms. A 10-fold cross-validation procedure showed 70-85% accuracy depending on the choice of machine-learning algorithm, and the different definitions used to identify multi-modal binding ligands. In addition, a quantified importance measurement for global and local descriptors was also provided, which suggests that the local features are more likely to have an effect on multi-modal binding than the global ones. The interpretable global and local descriptors were also ranked by their importance. To test the classifier on real examples, several test sets including well-known promiscuous drugs were collected by a literature and database search. Despite the difficulty in constructing appropriate testable sets, the classifier showed reasonable results that were consistent with existing information on drug behavior. Finally, a test on natural enzyme substrates and artificial drugs suggests that the natural compounds tend to exhibit a broader range of multi-modal binding than the
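
    As a rough illustration of the classification setup described above, the sketch below runs a 10-fold cross-validated classifier and ranks feature importances with scikit-learn; the descriptors and labels are synthetic stand-ins, not the PDB-derived features used in the study.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 30))    # 200 ligands, 30 hypothetical global/local descriptors
      y = rng.integers(0, 2, size=200)      # 1 = multi-modal binder, 0 = mono-binding ligand

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
      print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

      # descriptor ranking, analogous to the paper's importance measurement
      clf.fit(X, y)
      print(np.argsort(clf.feature_importances_)[::-1][:5])   # indices of the top-5 descriptors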

  8. Understanding the Requirements for Open Source Software

    DTIC Science & Technology

    2009-06-17

    …fields like astrophysics that critically depend on software, open source is considered an essential precondition for research to proceed, and for… contributors or participants, new ideas, new career opportunities, and new research publications.

  9. Hillmaker: an open source occupancy analysis tool.

    PubMed

    Isken, Mark W

    2005-12-01

    Managerial decision making problems in the healthcare industry often involve considerations of customer occupancy by time of day and day of week. We describe an occupancy analysis tool called Hillmaker which has been used in numerous healthcare operations studies. It is being released as a free and open source software project.

  10. There's No Need to Fear Open Source

    ERIC Educational Resources Information Center

    Balas, Janet

    2005-01-01

    The last time this author wrote about open source (OS) software was in the September 2004 issue of Computers in Libraries, which was devoted to making the most of what you have and do-it-yourself solutions. After the column appeared, she received an e-mail from David Dorman of Index Data, who believed that she had done OS products a disservice…

  12. Implementing Rakim: Open Source Chat Reference Software

    ERIC Educational Resources Information Center

    Caraway, Shawn; Payne, Susan

    2005-01-01

    This article describes the conception, implementation, and current status of Rakim open source software at Midlands Technical College (MTC), a 2-year school in Columbia, South Carolina. MTC has two large campuses and three smaller campuses. Although the library functions as a single unit, there are separate…

  13. Of Birkenstocks and Wingtips: Open Source Licenses

    ERIC Educational Resources Information Center

    Gandel, Paul B.; Wheeler, Brad

    2005-01-01

    The notion of collaborating to create open source applications for higher education is rapidly gaining momentum. From course management systems to ERP financial systems, higher education institutions are working together to explore whether they can in fact build a better mousetrap. As Lois Brooks, of Stanford University, recently observed, the…

  15. Open-source syringe pump library.

    PubMed

    Wijnen, Bas; Hunt, Emily J; Anzalone, Gerald C; Pearce, Joshua M

    2014-01-01

    This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less than one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of a given research activity requiring a syringe pump, including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media, among other applications.
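
    For readers curious about the control side, the following hypothetical helper (not part of the published design files) shows the kind of calculation such a pump controller performs: converting a requested flow rate into a stepper pulse frequency from the syringe diameter and the drive's lead-screw and microstepping parameters, all of which are assumed values here.

      import math

      def steps_per_second(flow_rate_ml_min, syringe_diameter_mm,
                           lead_mm_per_rev=8.0, steps_per_rev=200, microstepping=16):
          """Return the stepper pulse frequency (Hz) needed for a requested flow rate."""
          area_mm2 = math.pi * (syringe_diameter_mm / 2.0) ** 2   # plunger cross-section
          linear_mm_min = (flow_rate_ml_min * 1000.0) / area_mm2  # 1 mL = 1000 mm^3
          revs_per_min = linear_mm_min / lead_mm_per_rev          # lead-screw revolutions
          return revs_per_min * steps_per_rev * microstepping / 60.0

      if __name__ == "__main__":
          # e.g. 10 mL/min through a 20 mm diameter syringe
          print(f"{steps_per_second(10.0, 20.0):.1f} steps/s")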

  16. Open-Source Syringe Pump Library

    PubMed Central

    Wijnen, Bas; Hunt, Emily J.; Anzalone, Gerald C.; Pearce, Joshua M.

    2014-01-01

    This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less than one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of a given research activity requiring a syringe pump, including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media, among other applications. PMID:25229451

  17. Communal Resources in Open Source Software Development

    ERIC Educational Resources Information Center

    Spaeth, Sebastian; Haefliger, Stefan; von Krogh, Georg; Renzl, Birgit

    2008-01-01

    Introduction: Virtual communities play an important role in innovation. The paper focuses on the particular form of collective action in virtual communities underlying Open Source software development projects. Method: Building on resource mobilization theory and private-collective innovation, we propose a theory of collective action in…

  18. [Multi-modal treatment of patients with multiple liver metastases caused by sigmoid cancer].

    PubMed

    Sawada, S; Nagata, K; Kato, T; Oshima, T; Yoshida, M; Kawa, S; Harima, K; Tanaka, Y; Nakamura, H

    1989-05-01

    A case of sigmoid cancer with multiple liver metastases (S2PON3 + H3) treated by multi-modal therapy is reported. The multi-modal treatment included intra-arterial administration of anti-cancer drugs as a pre-surgical treatment, intra-arterial infusion chemotherapy lasting for three to five weeks (three times), hyperthermia combined with intra-arterial administration of anti-cancer drugs, and an intra-arterial expandable metallic stent. The patient lived for 2 years and 4 months in good condition.

  19. Cross-platform digital assessment forms for evaluating surgical skills.

    PubMed

    Andersen, Steven Arild Wuyts

    2015-01-01

    A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments in various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.

  20. An open-source framework for testing tracking devices using Lego Mindstorms

    NASA Astrophysics Data System (ADS)

    Jomier, Julien; Ibanez, Luis; Enquobahrie, Andinet; Pace, Danielle; Cleary, Kevin

    2009-02-01

    In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with widespread use of extreme programming methodology that emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as a tracking device. Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application program interface (API) is cross-platform and runs on Windows, Linux and MacOS. We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and have shown that regression testing of tracking devices can be performed at low cost and significantly improves the quality of the software.
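
    A minimal sketch of the regression-test idea follows: command the robot to a set of known positions, collect the positions reported by the tracker, and fail if the RMS error exceeds a tolerance. The positions below are synthetic placeholders; the real framework drives the Lego Mindstorms arm and reads the tracker through IGSTK's C++ API.

      import numpy as np

      def rms_error(commanded, reported):
          """Root-mean-square distance between commanded and reported 3-D positions (mm)."""
          diff = np.asarray(commanded) - np.asarray(reported)
          return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

      def test_tracker_accuracy(tolerance_mm=1.5):
          commanded = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 25]], float)
          # stand-in for measurements gathered from the tracking interface
          reported = commanded + np.random.default_rng(1).normal(0.0, 0.3, commanded.shape)
          err = rms_error(commanded, reported)
          assert err < tolerance_mm, f"tracker RMS error {err:.2f} mm exceeds tolerance"
          return err

      if __name__ == "__main__":
          print(f"RMS error: {test_tracker_accuracy():.2f} mm")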

  1. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited number of open-source software packages in China, "Red Flag Linux" stands out most strikingly, commanding a 30 percent share of the Chinese software market. Unlike the spontaneity of the open-source movement in North America, open-source software development in…

  2. Students' Multi-Modal Re-Presentations of Scientific Knowledge and Creativity

    ERIC Educational Resources Information Center

    Koren, Yitzhak; Klavir, Rama; Gorodetsky, Malka

    2005-01-01

    The paper brings the results of a project that passed on to students the opportunity for re-presenting their acquired knowledge via the construction of multi-modal "learning resources". These "learning resources" substituted for lectures and books and became the official learning sources in the classroom. The rationale for the…

  3. (In)Flexibility of Constituency in Japanese in Multi-Modal Categorial Grammar with Structured Phonology

    ERIC Educational Resources Information Center

    Kubota, Yusuke

    2010-01-01

    This dissertation proposes a theory of categorial grammar called Multi-Modal Categorial Grammar with Structured Phonology. The central feature that distinguishes this theory from the majority of contemporary syntactic theories is that it decouples (without completely segregating) two aspects of syntax--hierarchical organization (reflecting…

  4. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  5. Measurement of photosynthetic response to plant water stress using a multi-modal sensing system

    USDA-ARS?s Scientific Manuscript database

    Plant yield and productivity are significantly affected by abiotic stresses such as water or nutrient deficiency. An automated, timely detection of plant stress can mitigate stress development, thereby maximizing productivity and fruit quality. A multi-modal sensing system was developed and evalua...

  6. Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge

    ERIC Educational Resources Information Center

    Blown, Eric; Bryce, Tom G. K.

    2010-01-01

    The astronomy concepts of 345 young people were studied over a 10-year period using a multi-media, multi-modal methodology in a research design where survey participants were interviewed three times and control subjects were interviewed twice. The purpose of the research was to search for evidence to clarify competing theories on "conceptual…

  7. Manifold-based feature point matching for multi-modal image registration.

    PubMed

    Hu, Liang; Wang, Manning; Song, Zhijian

    2013-03-01

    Images captured using different modalities usually have significant variations in their intensities, which makes it difficult to reveal their internal structural similarities and achieve accurate registration. Most conventional feature-based image registration techniques are fast and efficient, but they cannot be used directly for the registration of multi-modal images because of these intensity variations. This paper introduces the theory of manifold learning to transform the original images into mono-modal modalities, which is a feature-based method that is applicable to multi-modal image registration. Subsequently, scale-invariant feature transform is used to detect highly distinctive local descriptors and matches between corresponding images, and a point-based registration is executed. The algorithm was tested with T1- and T2-weighted magnetic resonance (MR) images obtained from BrainWeb. Both qualitative and quantitative evaluations of the method were performed and the results compared with those produced previously. The experiments showed that feature point matching after manifold learning achieved more accurate results than did the similarity measure for multi-modal image registration. This study provides a new manifold-based feature point matching method for multi-modal medical image registration, especially for MR images. The proposed method performs better than do conventional intensity-based techniques in terms of its registration accuracy and is suitable for clinical procedures. Copyright © 2012 John Wiley & Sons, Ltd.
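
    The sketch below is a rough, simplified illustration of the core idea (not the authors' method): describe each pixel by its local patch, embed the patches with a manifold learner, and use the first embedding coordinate as a modality-independent structural image on which ordinary feature matching can then be run. The patch size, neighbourhood size and toy input are assumptions.

      import numpy as np
      from sklearn.manifold import Isomap

      def structural_image(img, patch=3, n_neighbors=8):
          """Map an image to the first manifold-embedding coordinate of its local patches."""
          h, w = img.shape
          r = patch // 2
          padded = np.pad(img, r, mode="reflect")
          patches = np.array([padded[i:i + patch, j:j + patch].ravel()
                              for i in range(h) for j in range(w)])
          emb = Isomap(n_neighbors=n_neighbors, n_components=1).fit_transform(patches)
          return emb.reshape(h, w)

      if __name__ == "__main__":
          t1_like = np.random.default_rng(0).random((24, 24))   # toy stand-in for an MR slice
          print(structural_image(t1_like).shape)                # -> (24, 24)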

  8. DASC: Robust Dense Descriptor for Multi-Modal and Multi-Spectral Correspondence Estimation.

    PubMed

    Kim, Seungryong; Min, Dongbo; Ham, Bumsub; Do, Minh N; Sohn, Kwanghoon

    2017-09-01

    Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding a reliable correspondence between multi-modal or multi-spectral images still remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate dense multi-modal and multi-spectral correspondences. Based on an observation that self-similarity existing within images is robust to imaging modality variations, we define the descriptor with a series of adaptive self-correlation similarity measures between patches sampled by a randomized receptive field pooling, in which the sampling pattern is obtained using discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark with varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of dense multi-modal and multi-spectral correspondences.
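
    To make the self-similarity intuition concrete, here is a crude stand-in (not the published descriptor): a pixel is described by the normalized correlations between its centre patch and patches at a fixed set of offsets, whereas DASC learns a randomized receptive-field sampling pattern and adds edge-aware filtering. The offsets, patch radius and toy image are assumptions.

      import numpy as np

      def patch(img, y, x, r):
          return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

      def ncc(a, b):
          """Normalized cross-correlation between two flattened patches."""
          a, b = a - a.mean(), b - b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0

      def self_similarity_descriptor(img, y, x, offsets, r=2):
          """Correlate the centre patch with patches at the given offsets."""
          centre = patch(img, y, x, r)
          return np.array([ncc(centre, patch(img, y + dy, x + dx, r)) for dy, dx in offsets])

      if __name__ == "__main__":
          img = np.random.default_rng(0).random((32, 32))
          offsets = [(-3, 0), (3, 0), (0, -3), (0, 3), (3, 3)]
          print(self_similarity_descriptor(img, 16, 16, offsets))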

  9. Outcome of transarterial chemoembolization-based multi-modal treatment in patients with unresectable hepatocellular carcinoma.

    PubMed

    Song, Do Seon; Nam, Soon Woo; Bae, Si Hyun; Kim, Jin Dong; Jang, Jeong Won; Song, Myeong Jun; Lee, Sung Won; Kim, Hee Yeon; Lee, Young Joon; Chun, Ho Jong; You, Young Kyoung; Choi, Jong Young; Yoon, Seung Kew

    2015-02-28

    To investigate the efficacy and safety of transarterial chemoembolization (TACE)-based multimodal treatment in patients with large hepatocellular carcinoma (HCC). A total of 146 consecutive patients were included in the analysis, and their medical records and radiological data were reviewed retrospectively. In total, 119 patients received TACE-based multi-modal treatments, and the remaining 27 received conservative management. Overall survival (P<0.001) and objective tumor response (P=0.003) were significantly better in the treatment group than in the conservative group. After subgroup analysis, survival benefits were observed not only in the multi-modal treatment group compared with the TACE-only group (P=0.002) but also in the surgical treatment group compared with the loco-regional treatment-only group (P<0.001). Multivariate analysis identified tumor stage (P<0.001) and tumor type (P=0.009) as two independent pre-treatment factors for survival. After adjusting for significant pre-treatment prognostic factors, objective response (P<0.001), surgical treatment (P=0.009), and multi-modal treatment (P=0.002) were identified as independent post-treatment prognostic factors. TACE-based multi-modal treatments were safe and more beneficial than conservative management. Salvage surgery after successful downstaging resulted in long-term survival in patients with large, unresectable HCC.

  11. Information content and analysis methods for multi-modal high-throughput biomedical data.

    PubMed

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R; Picone, Marco; Poli, Tito; Aliferis, Constantin F; Statnikov, Alexander

    2014-03-21

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect "multi-modal" data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and contributions of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data.
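
    The sketch below mirrors, on synthetic data, the comparison the study performs: a uni-modal classifier per data modality versus an integrative model on the concatenated features, each scored by cross-validated AUC. The modality names, feature counts and signal strengths are invented for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 150
      y = rng.integers(0, 2, size=n)
      modalities = {
          "gene_expression": rng.standard_normal((n, 40)) + y[:, None] * 0.8,
          "methylation":     rng.standard_normal((n, 25)) + y[:, None] * 0.3,
          "mirna":           rng.standard_normal((n, 15)) + y[:, None] * 0.2,
      }

      for name, X in modalities.items():   # uni-modal analyses
          auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                cv=5, scoring="roc_auc").mean()
          print(f"{name:16s} uni-modal AUC: {auc:.2f}")

      X_all = np.hstack(list(modalities.values()))   # simple integrative analysis
      auc = cross_val_score(LogisticRegression(max_iter=1000), X_all, y,
                            cv=5, scoring="roc_auc").mean()
      print(f"{'concatenated':16s} multi-modal AUC: {auc:.2f}")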

  12. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    ERIC Educational Resources Information Center

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  13. Graduate Student Perceptions of Multi-Modal Tablet Use in Academic Environments

    ERIC Educational Resources Information Center

    Bryant, Ezzard C., Jr.

    2016-01-01

    The purpose of this study was to explore graduate student perceptions of use and the ease of use of multi-modal tablets to access electronic course materials, and the perceived differences based on students' gender, age, college of enrollment, and previous experience. This study used the Unified Theory of Acceptance and Use of Technology to…

  15. Information content and analysis methods for Multi-Modal High-Throughput Biomedical Data

    NASA Astrophysics Data System (ADS)

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R.; Picone, Marco; Poli, Tito; Aliferis, Constantin F.; Statnikov, Alexander

    2014-03-01

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect "multi-modal" data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and contributions of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data.

  17. Open-source tools for data mining.

    PubMed

    Zupan, Blaz; Demsar, Janez

    2008-03-01

    With a growing volume of biomedical databases and repositories, the need to develop a set of tools to address their analysis and support knowledge discovery is becoming acute. The data mining community has developed a substantial set of techniques for computational treatment of these data. In this article, we discuss the evolution of open-source toolboxes that data mining researchers and enthusiasts have developed over the span of a few decades and review several currently available open-source data mining suites. The approaches we review are diverse in data mining methods and user interfaces and also demonstrate that the field and its tools are ready to be fully exploited in biomedical research.

  18. Meteorological Error Budget Using Open Source Data

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7831, September 2016, US Army Research Laboratory. Meteorological Error Budget Using Open-Source Data, by J Cogan, J Smith, P… Technical Report; dates covered: 07/2015–08/2016.

  19. Open Source Cable Models for EMI Simulations

    NASA Astrophysics Data System (ADS)

    Greedy, S.; Smartt, C.; Thomas, D. W. P.

    2016-05-01

    This paper describes the progress of work towards an open-source software toolset suitable for developing Spice-based multi-conductor cable models. The issues related to creating a transmission line model for implementation in Spice, which include the frequency-dependent properties of real cables, are presented, and the viability of Spice cable models is demonstrated through application to a three-conductor crosstalk model. Development of the techniques to include models of shielded cables and incident field excitation has been demonstrated.

  20. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  1. From open source communications to knowledge

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Roberts, Colin; Rogers, David; Webberley, Will; Innes, Martin; Braines, Dave

    2016-05-01

    Rapid processing and exploitation of open source information, including social media sources, in order to shorten decision-making cycles, has emerged as an important issue in intelligence analysis in recent years. Through a series of case studies and natural experiments, focussed primarily upon policing and counter-terrorism scenarios, we have developed an approach to information foraging and framing to inform decision making, drawing upon open source intelligence, in particular Twitter, due to its real-time focus and frequent use as a carrier for links to other media. Our work uses a combination of natural language (NL) and controlled natural language (CNL) processing to support information collection from human sensors, linking and schematising of collected information, and the framing of situational pictures. We illustrate the approach through a series of vignettes, highlighting (1) how relatively lightweight and reusable knowledge models (schemas) can rapidly be developed to add context to collected social media data, (2) how information from open sources can be combined with reports from trusted observers, for corroboration or to identify conflicting information; and (3) how the approach supports users operating at or near the tactical edge, to rapidly task information collection and inform decision-making. The approach is supported by bespoke software tools for social media analytics and knowledge management.

  2. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Up until now this kind of defense was a privilege of the few; out-budgeted defenders relying on low-cost solutions were left vulnerable to the uprising of innovative attacking methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century in adopting open solutions to E-Commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state and face the best-known problems in data handling and consequently propose the most appealing techniques to face these challenges through an open solution.

  3. Open Source Approach to Urban Growth Simulation

    NASA Astrophysics Data System (ADS)

    Petrasova, A.; Petras, V.; Van Berkel, D.; Harmon, B. A.; Mitasova, H.; Meentemeyer, R. K.

    2016-06-01

    Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban - Regional Environment Simulation) - a patch-based, stochastic, multi-level land change modeling framework as a case showing how what was once a closed and inaccessible model benefited from integration with open source GIS. We will describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and a graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We will describe FUTURES integration into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential or even modify the model for their applications.

  4. SeqKit: A Cross-Platform and Ultrafast Toolkit for FASTA/Q File Manipulation.

    PubMed

    Shen, Wei; Le, Shuai; Li, Yan; Hu, Fuquan

    2016-01-01

    FASTA and FASTQ are basic and ubiquitous formats for storing nucleotide and protein sequences. Common manipulations of FASTA/Q files include converting, searching, filtering, deduplication, splitting, shuffling, and sampling. Existing tools only implement some of these manipulations, and not particularly efficiently, and some are only available for certain operating systems. Furthermore, the complicated installation process of required packages and running environments can render these programs less user friendly. This paper describes a cross-platform, ultrafast, comprehensive toolkit for FASTA/Q processing. SeqKit provides executable binary files for all major operating systems, including Windows, Linux, and Mac OSX, and can be directly used without any dependencies or pre-configurations. SeqKit demonstrates competitive performance in execution time and memory usage compared to similar tools. The efficiency and usability of SeqKit enable researchers to rapidly accomplish common FASTA/Q file manipulations. SeqKit is open source and available on Github at https://github.com/shenwei356/seqkit.
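
    For readers unfamiliar with these formats, the pure-Python sketch below (not SeqKit itself, which is far faster on real data) makes two of the listed manipulations concrete: length filtering and deduplication by sequence. The input file name is a placeholder.

      def read_fasta(path):
          """Yield (header, sequence) pairs from a FASTA file."""
          header, seq = None, []
          with open(path) as fh:
              for line in fh:
                  line = line.rstrip()
                  if line.startswith(">"):
                      if header is not None:
                          yield header, "".join(seq)
                      header, seq = line[1:], []
                  else:
                      seq.append(line)
          if header is not None:
              yield header, "".join(seq)

      def filter_and_dedup(records, min_len=100):
          """Drop sequences shorter than min_len and exact duplicate sequences."""
          seen = set()
          for header, seq in records:
              if len(seq) >= min_len and seq not in seen:
                  seen.add(seq)
                  yield header, seq

      if __name__ == "__main__":
          for header, seq in filter_and_dedup(read_fasta("example.fa")):
              print(f">{header}\n{seq}")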

  5. Use of Multi-Modal Media and Tools in an Online Information Literacy Course: College Students' Attitudes and Perceptions

    ERIC Educational Resources Information Center

    Chen, Hsin-Liang; Williams, James Patrick

    2009-01-01

    This project studies the use of multi-modal media objects in an online information literacy class. One hundred sixty-two undergraduate students answered seven surveys. Significant relationships are found among computer skills, teaching materials, communication tools and learning experience. Multi-modal media objects and communication tools are…

  6. Open Source GIS based integrated watershed management

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high resolution and process based, with real-time capability to assess changing resource issues critical to short, medium and long-term environmental management. The objective here is to merge two renowned, well published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address

  7. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  8. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
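
    As a point of comparison only (this is not OpenCFU's algorithm), the snippet below counts bright connected regions in a synthetic plate image with scipy.ndimage; OpenCFU adds shape filtering, robustness to noise and artefacts, and a GUI on top of this basic idea.

      import numpy as np
      from scipy import ndimage

      def count_colonies(image, threshold=0.5, min_pixels=20):
          """Count connected bright regions larger than min_pixels."""
          mask = image > threshold
          labels, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          return int((sizes >= min_pixels).sum())

      if __name__ == "__main__":
          # synthetic "plate": dark background with three bright disks
          img = np.zeros((200, 200))
          yy, xx = np.mgrid[0:200, 0:200]
          for cy, cx in [(50, 60), (120, 150), (160, 40)]:
              img[(yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2] = 1.0
          print(count_colonies(img))   # -> 3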

  9. OpenMS: a flexible open-source software platform for mass spectrometry data analysis.

    PubMed

    Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver

    2016-08-30

    High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
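
    A short sketch of the Python side of the API described above, via the pyopenms package (assuming it is installed, e.g. with pip install pyopenms); the mzML file name is a placeholder.

      from pyopenms import MSExperiment, MzMLFile

      exp = MSExperiment()
      MzMLFile().load("sample.mzML", exp)    # placeholder input file

      print("spectra:", exp.getNrSpectra())
      print("chromatograms:", exp.getNrChromatograms())
      for spec in exp:
          if spec.getMSLevel() == 1:         # first survey (MS1) scan
              mz, intensity = spec.get_peaks()
              print("RT:", spec.getRT(), "peaks:", len(mz))
              break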

  10. Sensorcaching: An Open-Source platform for citizen science and environmental monitoring

    NASA Astrophysics Data System (ADS)

    O'Keefe, Michael

    Sensorcaching is an Open-Source hardware and software project designed with several goals in mind. It allows for long-term environmental monitoring with low cost and low power-usage hardware. It encourages citizens to take an active role in the health of their community by providing the means to record and explore changes in their environment. And it provides opportunities for education about the necessity and techniques of studying our planet. Sensorcaching is a 3-part project, consisting of a hardware sensor, a cross-platform mobile application, and a web platform for data aggregation. Its evolution has been driven by the desire to allow for long-term environmental monitoring by laypeople without significant capital expenditures or onerous technical burdens.

  11. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  12. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    PubMed

    Dryden, Michael D M; Wheeler, Aaron R

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.

  13. The open-source neuroimaging research enterprise.

    PubMed

    Marcus, Daniel S; Archie, Kevin A; Olsen, Timothy R; Ramaratnam, Mohana

    2007-11-01

    While brain imaging in the clinical setting is largely a practice of looking at images, research neuroimaging is a quantitative and integrative enterprise. Images are run through complex batteries of processing and analysis routines to generate numeric measures of brain characteristics. Other measures potentially related to brain function - demographics, genetics, behavioral tests, neuropsychological tests - are key components of most research studies. The canonical scanner - PACS - viewing station axis used in clinical practice is therefore inadequate for supporting neuroimaging research. Here, we model the neuroimaging research enterprise as a workflow. The principal components of the workflow include data acquisition, data archiving, data processing and analysis, and data utilization. We also describe a set of open-source applications to support each step of the workflow and the transitions between these steps. These applications include DIGITAL IMAGING AND COMMUNICATIONS IN MEDICINE viewing and storage tools, the EXTENSIBLE NEUROIMAGING ARCHIVE TOOLKIT data archiving and exploration platform, and an engine for running processing/analysis pipelines. The overall picture presented is aimed to motivate open-source developers to identify key integration and communication points for interoperating with complementary applications.

  14. Spatial rainfall data in open source environment

    NASA Astrophysics Data System (ADS)

    Schuurmans, Hanneke; Maarten Verbree, Jan; Leijnse, Hidde; van Heeringen, Klaas-Jan; Uijlenhoet, Remko; Bierkens, Marc; van de Giesen, Nick; Gooijer, Jan; van den Houten, Gert

    2013-04-01

    Since January 2013, the Netherlands has had access to innovative, high-quality rainfall data for use by water managers. The product is innovative for two reasons: (i) it was developed in a 'golden triangle' construction, a cooperation between government, business and research; and (ii) the rainfall products are developed under the open-source GPL license. The initiative comes from a group of water boards in the Netherlands that joined forces to fund the development of a new rainfall product. Not only data from Dutch radar stations (as currently used by the Dutch meteorological organization KNMI) but also data from radars in Germany and Belgium are used. After a radar composite is made, it is adjusted according to data from rain gauges (ground truth). This results in 9 different rainfall products that provide the best rainfall data for each moment. Specific knowledge is necessary to develop these kinds of data, so a pool of experts (KNMI, Deltares and 3 universities) participated in the development. The philosophy of the developing parties is that products like this should be developed as open source. This way knowledge is shared and the whole community is able to make suggestions for improvement. In our opinion this is the only way to make real progress in product development. Furthermore, the financial resources of government organizations are used optimally. More info (in Dutch): www.nationaleregenradar.nl
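
    The simplest form of the gauge adjustment mentioned above is a mean-field bias correction, sketched below on synthetic data; the operational products use considerably more sophisticated spatial adjustment, so this is illustrative only.

      import numpy as np

      def mean_field_bias_adjust(radar_field, gauge_values, gauge_pixels):
          """Scale a radar rainfall grid by the overall gauge/radar ratio at the gauge sites."""
          radar_at_gauges = np.array([radar_field[r, c] for r, c in gauge_pixels])
          valid = radar_at_gauges > 0
          bias = gauge_values[valid].sum() / radar_at_gauges[valid].sum()
          return radar_field * bias

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          radar = rng.gamma(2.0, 1.0, size=(100, 100))               # synthetic radar field (mm)
          pixels = [(10, 12), (40, 75), (80, 30)]                    # gauge locations on the grid
          gauges = np.array([radar[r, c] * 1.3 for r, c in pixels])  # radar underestimates by 30%
          adjusted = mean_field_bias_adjust(radar, gauges, pixels)
          print(round(float(adjusted[10, 12] / radar[10, 12]), 2))   # -> 1.3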

  15. Open Source Software to Control Bioflo Bioreactors

    PubMed Central

    Burdge, David A.; Libourel, Igor G. L.

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design, cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW. PMID:24667828
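
    The CSV protocol idea lends itself to a very small execution loop; the sketch below is hypothetical (the column names and setpoint callback are invented here, not taken from the published software) and simply applies each row's setpoint at its scheduled time offset.

      import csv
      import time

      def run_protocol(csv_path, apply_setpoint, speedup=1.0):
          """Execute rows of (minutes, parameter, value), calling apply_setpoint at each time."""
          with open(csv_path, newline="") as fh:
              rows = sorted(csv.DictReader(fh), key=lambda r: float(r["minutes"]))
          start = time.monotonic()
          for row in rows:
              target = float(row["minutes"]) * 60.0 / speedup
              wait = target - (time.monotonic() - start)
              if wait > 0:
                  time.sleep(wait)
              apply_setpoint(row["parameter"], float(row["value"]))

      if __name__ == "__main__":
          def fake_setpoint(parameter, value):        # stand-in for writing to the controller
              print(f"set {parameter} -> {value}")

          # example protocol.csv contents:
          #   minutes,parameter,value
          #   0,agitation_rpm,200
          #   30,temperature_c,37
          run_protocol("protocol.csv", fake_setpoint, speedup=600.0)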

  16. Open source software to control Bioflo bioreactors.

    PubMed

    Burdge, David A; Libourel, Igor G L

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design, cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.

  17. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    PubMed

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
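
    On synthetic stand-ins for the two modalities, the sketch below contrasts the two fusion strategies compared in the study: majority voting across per-modality classifiers versus stacked generalization, where a meta-learner combines the per-modality predictions. Feature counts and signal strengths are invented; this is not the study's pipeline.

      import numpy as np
      from sklearn.ensemble import StackingClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import FunctionTransformer, StandardScaler

      rng = np.random.default_rng(0)
      n = 200
      y = rng.integers(0, 2, size=n)
      image_feats = rng.standard_normal((n, 20)) + y[:, None] * 0.5    # e.g. histopathology features
      rnaseq_feats = rng.standard_normal((n, 50)) + y[:, None] * 0.4   # e.g. RNA-seq features
      X = np.hstack([image_feats, rnaseq_feats])

      def modality_pipeline(start, stop):
          """Base learner restricted to one modality's feature columns."""
          return make_pipeline(FunctionTransformer(lambda Z, a=start, b=stop: Z[:, a:b]),
                               StandardScaler(), LogisticRegression(max_iter=1000))

      base = [("image", modality_pipeline(0, 20)), ("rnaseq", modality_pipeline(20, 70))]
      vote = VotingClassifier(estimators=base, voting="hard")
      stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())

      for name, model in [("majority vote", vote), ("stacking", stack)]:
          acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
          print(f"{name:14s} accuracy: {acc:.2f}")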

  18. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    SciTech Connect

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  19. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is an active research topic in medical imaging and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation and multi-modal three-dimensional medical image registration.
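
    For readers who want a runnable baseline, a classic single-resolution Demons registration can be executed with SimpleITK as sketched below. This is only the standard intensity-based Demons; the modified energy with gray-level and structure-tensor conservation and L-BFGS optimization described in the abstract is not reproduced here.

        # Baseline Demons registration with SimpleITK (illustrative only).
        import SimpleITK as sitk

        def demons_register(fixed_path, moving_path, iterations=100, smoothing_sigma=1.5):
            fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
            moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
            demons = sitk.DemonsRegistrationFilter()
            demons.SetNumberOfIterations(iterations)
            demons.SetStandardDeviations(smoothing_sigma)  # Gaussian smoothing of the field
            displacement = demons.Execute(fixed, moving)
            transform = sitk.DisplacementFieldTransform(displacement)
            warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear,
                                   0.0, moving.GetPixelID())
            return transform, warped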

  20. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.

  1. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records

    PubMed Central

    Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    Objective There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. Materials and methods We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. Results An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. Discussion A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. Conclusion We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries. PMID:22319176
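
    A toy version of the multi-modal case-finding logic described above is sketched below: evidence from structured codes, free-text NLP and OCR output is collected per subject and combined. The codes, regular expression and field layout are illustrative stand-ins, not the validated eMERGE algorithm.

        # Toy multi-modal cataract case finder; codes and patterns are illustrative.
        import re

        EXAMPLE_CODES = {"366.10", "366.16", "366.9"}   # assumed example ICD-9 cataract codes
        TEXT_PATTERN = re.compile(
            r"\b(nuclear sclerosis|cortical cataract|posterior subcapsular)\b", re.I)

        def is_cataract_case(structured_codes, note_texts, ocr_texts):
            """Flag a subject if any modality supplies supporting evidence."""
            evidence = {
                "structured": bool(EXAMPLE_CODES & set(structured_codes)),
                "nlp": any(TEXT_PATTERN.search(t) for t in note_texts),
                "ocr": any(TEXT_PATTERN.search(t) for t in ocr_texts),
            }
            return any(evidence.values()), evidence

        # Example:
        # case, evidence = is_cataract_case(["366.16"], ["Dense nuclear sclerosis OU"], [])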

  2. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records.

    PubMed

    Peissig, Peggy L; Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries.

  3. Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia

    DTIC Science & Technology

    2015-10-01

    Award number W81XWH-14-1-0537, "Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia." The recoverable report fragments describe imaging choroidal vessels/capillaries using CARS intravital microscopy and measuring oxy-hemoglobin levels in PBI test and control eyes.

  4. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures by using the three models. The recognition results of the three models are integrated by the proposed framework, and the output becomes the final result. The motion and audio models are learned using Hidden Markov Models. Random Forest, which is the video classifier, is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the competition organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
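
    The decision-level integration of the three unimodal recognizers can be pictured as a weighted combination of per-class scores followed by an argmax, as in the generic sketch below (the weights and score arrays are assumed; this is not the MMGRC system itself).

        # Generic late fusion of per-modality class scores (illustrative weights).
        import numpy as np

        def fuse_scores(motion_scores, audio_scores, video_scores, weights=(0.4, 0.3, 0.3)):
            """Each *_scores sequence holds one normalized score per gesture class."""
            stacked = np.vstack([motion_scores, audio_scores, video_scores])
            fused = np.average(stacked, axis=0, weights=weights)
            return int(np.argmax(fused)), fused

        # Example with three gesture classes:
        # label, fused = fuse_scores([0.1, 0.7, 0.2], [0.2, 0.5, 0.3], [0.3, 0.4, 0.3])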

  5. A flexible graphical model for multi-modal parcellation of the cortex.

    PubMed

    Parisot, Sarah; Glocker, Ben; Ktena, Sofia Ira; Arslan, Salim; Schirmer, Markus D; Rueckert, Daniel

    2017-09-06

    Advances in neuroimaging have provided a tremendous amount of in-vivo information on the brain's organisation. Its anatomy and cortical organisation can be investigated from the point of view of several imaging modalities, many of which have been studied for mapping functionally specialised cortical areas. There is strong evidence that a single modality is not sufficient to fully identify the brain's cortical organisation. Combining multiple modalities in the same parcellation task has the potential to provide more accurate and robust subdivisions of the cortex. Nonetheless, existing brain parcellation methods are typically developed and tested on single modalities using a specific type of information. In this paper, we propose Graph-based Multi-modal Parcellation (GraMPa), an iterative framework designed to handle the large variety of available input modalities to tackle the multi-modal parcellation task. At each iteration, we compute a set of parcellations from different modalities and fuse them based on their local reliabilities. The fused parcellation is used to initialise the next iteration, forcing the parcellations to converge towards a set of mutually informed modality specific parcellations, where correspondences are established. We explore two different multi-modal configurations for group-wise parcellation using resting-state fMRI, diffusion MRI tractography, myelin maps and task fMRI. Quantitative and qualitative results on the Human Connectome Project database show that integrating multi-modal information yields a stronger agreement with well established atlases and more robust connectivity networks that provide a better representation of the population. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

    Report fragments from "Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network": The Pennsylvania State University has developed the Networked Robotic Systems Laboratory to conduct theoretical research on the mathematics of failures in sensor-network-based miniature multi-modal mobile robots and electromechanical systems.

  7. A Study of Clinically Related Open Source Software Projects

    PubMed Central

    Hogarth, Michael A.; Turner, Stuart

    2005-01-01

    Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized nor formally studied so the ‘fit’ of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056

  8. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    DOE PAGES

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; ...

    2016-05-03

    Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This paper identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. Finally, this work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  9. EVolution: an edge-based variational method for non-rigid multi-modal image registration

    NASA Astrophysics Data System (ADS)

    de Senneville, B. Denis; Zachiu, C.; Ries, M.; Moonen, C.

    2016-10-01

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only contrasts between physiological tissues, preferably present in both image modalities, and assumes deformable/elastic tissues. Given both, the approach is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to the ones provided by state-of-the-art multi-modal image registration algorithms, for all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. that are only present in some of the images).

  10. Multi-modal discriminative dictionary learning for Alzheimer's disease and mild cognitive impairment.

    PubMed

    Li, Qing; Wu, Xia; Xu, Lele; Chen, Kewei; Yao, Li; Li, Rui

    2017-10-01

    The differentiation of mild cognitive impairment (MCI), the prodromal stage of Alzheimer's disease (AD), from normal control (NC) is important, as recent research emphasizes the early pre-clinical stage for possible identification of disease abnormality, intervention and even prevention. The current study extends to multiple modalities the supervised within-class-similarity discriminative dictionary learning algorithm (SCDDL) we introduced previously for distinguishing MCI from NC. The proposed new algorithm is based on weighted combination and is named multi-modality SCDDL (mSCDDL). Structural magnetic resonance imaging (sMRI), fluorodeoxyglucose (FDG) positron emission tomography (PET) and florbetapir PET data of 113 AD patients, 110 MCI patients and 117 NC subjects from the Alzheimer's Disease Neuroimaging Initiative database were adopted for classification between MCI and NC, as well as between AD and NC. Adopting mSCDDL, the classification accuracy reached 98.5% for AD vs. NC and 82.8% for MCI vs. NC, which was superior to or comparable with the results of other state-of-the-art approaches reported in recent multi-modality publications. The mSCDDL procedure is a promising tool for assisting early disease diagnosis using neuroimaging data. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
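
    The decision-level fusion step described above reduces to a weighted sum of the CLDP-Gabor and CLDP-Depth distances followed by a nearest-identity assignment. The sketch below shows that final step only, with assumed weights and precomputed distances (feature extraction is omitted).

        # Decision-level fusion of 2D (Gabor) and depth distances; weights are illustrative.
        import numpy as np

        def identify(d_gabor, d_depth, gallery_ids, w_gabor=0.6, w_depth=0.4):
            """d_gabor / d_depth: one distance per gallery subject for the probe face."""
            total = w_gabor * np.asarray(d_gabor) + w_depth * np.asarray(d_depth)
            best = int(np.argmin(total))
            return gallery_ids[best], float(total[best])

        # Example:
        # identify([0.8, 0.3, 0.5], [0.7, 0.4, 0.9], ["id_A", "id_B", "id_C"])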

  12. EVolution: an edge-based variational method for non-rigid multi-modal image registration.

    PubMed

    Denis de Senneville, B; Zachiu, C; Ries, M; Moonen, C

    2016-10-21

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only contrasts between physiological tissues, preferably present in both image modalities, and assumes deformable/elastic tissues. Given both, the approach is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to the ones provided by state-of-the-art multi-modal image registration algorithms, for all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. that are only present in some of the images).

  13. Multi-modal image registration based on gradient orientations of minimal uncertainty.

    PubMed

    De Nigris, Dante; Collins, D Louis; Arbel, Tal

    2012-12-01

    In this paper, we propose a new multi-scale technique for multi-modal image registration based on the alignment of selected gradient orientations of reduced uncertainty. We show how the registration robustness and accuracy can be improved by restricting the evaluation of gradient orientation alignment to locations where the uncertainty of fixed image gradient orientations is minimal, which we formally demonstrate correspond to locations of high gradient magnitude. We also embed a computationally efficient technique for estimating the gradient orientations of the transformed moving image (rather than resampling pixel intensities and recomputing image gradients). We have applied our method to different rigid multi-modal registration contexts. Our approach outperforms mutual information and other competing metrics in the context of rigid multi-modal brain registration, where we show sub-millimeter accuracy with cases obtained from the retrospective image registration evaluation project. Furthermore, our approach shows significant improvements over standard methods in the highly challenging clinical context of image guided neurosurgery, where we demonstrate misregistration of less than 2 mm with relation to expert selected landmarks for the registration of pre-operative brain magnetic resonance images to intra-operative ultrasound images.
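
    The core similarity idea, evaluating gradient-orientation alignment only where the fixed image's gradient magnitude (and hence orientation certainty) is high, can be sketched in NumPy as below. This is a simplified 2D illustration of such a metric, not the authors' multi-scale implementation.

        # Simplified 2D gradient-orientation alignment metric, evaluated only at
        # high-gradient-magnitude pixels of the fixed image (illustrative only).
        import numpy as np

        def orientation_alignment(fixed, moving_warped, magnitude_percentile=90):
            gy_f, gx_f = np.gradient(fixed.astype(float))
            gy_m, gx_m = np.gradient(moving_warped.astype(float))
            magnitude = np.hypot(gx_f, gy_f)
            mask = magnitude >= np.percentile(magnitude, magnitude_percentile)
            theta_f = np.arctan2(gy_f, gx_f)[mask]
            theta_m = np.arctan2(gy_m, gx_m)[mask]
            # cos(2*dtheta) rewards parallel or anti-parallel gradients, which helps
            # across modalities where contrast may be inverted.
            return float(np.mean(np.cos(2.0 * (theta_f - theta_m))))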

  14. Information content and analysis methods for Multi-Modal High-Throughput Biomedical Data

    PubMed Central

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R.; Picone, Marco; Poli, Tito; Aliferis, Constantin F.; Statnikov, Alexander

    2014-01-01

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect “multi-modal” data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and contributions of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data. PMID:24651673

  15. Hand hygiene and healthcare system change within multi-modal promotion: a narrative review.

    PubMed

    Allegranzi, B; Sax, H; Pittet, D

    2013-02-01

    Many factors may influence the level of compliance with hand hygiene recommendations by healthcare workers. Lack of products and facilities as well as their inappropriate and non-ergonomic location represent important barriers. Targeted actions aimed at making hand hygiene practices feasible during healthcare delivery by ensuring that the necessary infrastructure is in place, defined as 'system change', are essential to improve hand hygiene in healthcare. In particular, access to alcohol-based hand rubs (AHRs) enables appropriate and timely hand hygiene performance at the point of care. The feasibility and impact of system change within multi-modal strategies have been demonstrated both at institutional level and on a large scale. The introduction of AHRs overcomes some important barriers to best hand hygiene practices and is associated with higher compliance, especially when integrated within multi-modal strategies. Several studies demonstrated the association between AHR consumption and reduction in healthcare-associated infection, in particular, meticillin-resistant Staphylococcus aureus bacteraemia. Recent reports demonstrate the feasibility and success of system change implementation on a large scale. The World Health Organization and other investigators have reported the challenges and encouraging results of implementing hand hygiene improvement strategies, including AHR introduction, in settings with limited resources. This review summarizes the available evidence demonstrating the need for system change and its importance within multi-modal hand hygiene improvement strategies. This topic is also discussed in a global perspective and highlights some controversial issues. Copyright © 2013 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  16. Evaluation of registration strategies for multi-modality images of rat brain slices

    NASA Astrophysics Data System (ADS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-05-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  17. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    SciTech Connect

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; Nazikian, Raffi

    2016-05-03

    Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This paper identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. Finally, this work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  18. A multi-modal face recognition method using complete local derivative patterns and depth maps.

    PubMed

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-10-20

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features.

  19. A new multi-modal fractional ablative CO2 laser for wrinkle reduction and skin resurfacing.

    PubMed

    Clementoni, Matteo Tretti; Lavagno, Rosalia; Munavalli, Girish

    2012-12-01

    The concept of fractional delivery of energy with both ablative and non-ablative devices is now well known and accepted as an effective method to attain significant aesthetic improvements in facial aging skin. A new, multi-modal, fractional, ablative CO2 laser that can create, using the same scanner/handpiece, deep columns in addition to superficial ablation has recently been proposed and was therefore investigated. Twenty-four patients were enrolled in this evaluation. Each of them received one multi-modal, fractional ablative treatment. Patients were clinically and photographically evaluated at baseline and 6 months after the procedure. The degree of photoaging and the efficacy of treatment were evaluated using a VAS five-point scale of selected skin features. A 3D image comparison was furthermore performed to objectify the improvements. For all of the analysed skin features of photodamage, a statistically significant improvement was obtained. The data collected with the 3D system demonstrated an average improvement of 42% in wrinkles and an average improvement of 40.1% in melanin variation. The multi-modal approach with a single handpiece allows good outcomes to be obtained with a very low incidence of adverse effects and a short downtime.

  1. Novel software package for cross-platform transcriptome analysis (CPTRA)

    PubMed Central

    2009-01-01

    Background Next-generation sequencing techniques enable several novel transcriptome profiling approaches. Recent studies indicated that digital gene expression profiling based on short sequence tags has superior performance compared to other transcriptome analysis platforms, including microarrays. However, transcriptomic analysis with tag-based methods often depends on an available genome sequence. The use of tag-based methods in species without a genome sequence should be complemented by other methods such as cDNA library sequencing. The combination of different next-generation sequencing techniques like 454 pyrosequencing and Illumina Genome Analyzer (Solexa) will enable high-throughput and accurate global gene expression profiling in species with limited genome information. The combination of transcriptome data acquisition methods requires cross-platform transcriptome data analysis platforms, including a new software package for data processing. Results Here we present a software package, CPTRA: Cross-Platform TRanscriptome Analysis, to analyze transcriptome profiling data from separate methods. The software package is available at http://people.tamu.edu/~syuan/cptra/cptra.html. It was applied to the case study of non-target-site glyphosate resistance in horseweed, and the data were mined to discover resistance target gene(s). For the software, the input data include a long-read sequence dataset with proper annotation and a short-read sequence tag dataset for the quantification of transcripts. By combining the two datasets, the software carries out unique sequence tag identification, tag counting for transcript quantification, and cross-platform sequence matching, whereby the short sequence tags can be annotated with a function, level of expression, and Gene Ontology (GO) classification. Multiple sequence search algorithms were implemented and compared. The analysis highlighted the importance of transport genes in glyphosate resistance and
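
    In its simplest form, the cross-platform matching step described above (counting short sequence tags against an annotated long-read reference) is an exact-substring lookup, as in the toy sketch below; CPTRA itself implements and compares several search algorithms, so this only illustrates the idea.

        # Toy tag-to-reference matching and counting (exact matches only).
        from collections import Counter

        def count_tags(tags, annotated_contigs):
            """annotated_contigs: dict of contig_id -> (sequence, annotation)."""
            counts = Counter()
            for tag in tags:
                for contig_id, (sequence, annotation) in annotated_contigs.items():
                    if tag in sequence:
                        counts[(contig_id, annotation)] += 1
            return counts

        # Example:
        # count_tags(["ACGTACGTACGTACGT"],
        #            {"contig1": ("TTACGTACGTACGTACGTAGG", "ABC transporter")})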

  2. An open source simulator for water management

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Selby, Philip; Mohammed, Khaled; Khadem, Majed; Padula, Silvia; Harou, Julien; Rosenberg, David; Rheinheimer, David

    2015-04-01

    Descriptive modelling of water resource systems requires the representation of different aspects in one model: the physical system, including hydrological inputs and engineered infrastructure, and human management, including social, economic and institutional behaviours and constraints. Although most water resource systems share some characteristics, such as the ability to represent them as a network of nodes and links, geographical, institutional and other differences mean that invariably each water system functions in a unique way. A diverse group is developing an open source simulation framework which will allow model developers to build generalised water management models that are customised to the institutional, physical and economic components they are seeking to model. The framework will allow the simulation of the complex individual and institutional behaviour required for the assessment of real-world resource systems. It supports the spatial and hierarchical structures commonly found in water resource systems. The individual infrastructures can be operated by different actors while policies are defined at a regional level by one or more institutional actors. The framework enables building multi-agent system simulators in which developers can define their own agent types and add their own decision-making code. Developers using the framework have two main tasks: (i) extend the core classes to represent the aspects of their particular system, and (ii) write model structure files. Both are done in Python. For task one, users must either write new decision-making code for each class or link to an existing code base to provide functionality to each of these extension classes. The model structure file links these extension classes in a standardised way to the network topology. The framework will be open-source and written in Python, and is to be available directly for download through standard installer packages. Many water management model developers are unfamiliar
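
    A minimal sketch of the two developer tasks mentioned above, extending a core class with custom decision-making code and declaring it in a model structure, is given below. The class names, attributes and the structure dictionary are hypothetical placeholders; the framework's actual API is not described in the abstract.

        # Hypothetical extension of a core node class with custom decision logic.
        class Node:
            """Stand-in for the framework's core node class."""
            def __init__(self, name):
                self.name = name
                self.storage = 0.0

            def setup(self, timestep):
                return 0.0

        class Reservoir(Node):
            """Developer extension: a reservoir that releases water above a target level."""
            def __init__(self, name, capacity, target_fraction):
                super().__init__(name)
                self.capacity = capacity
                self.target_fraction = target_fraction

            def setup(self, timestep):
                # Custom institutional rule: release any volume above the target level.
                release = max(0.0, self.storage - self.target_fraction * self.capacity)
                self.storage -= release
                return release

        # Toy stand-in for a model structure file linking extension classes to a topology:
        network = {"nodes": [Reservoir("upper_dam", capacity=1e6, target_fraction=0.8)],
                   "links": []}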

  3. Behind Linus's Law: Investigating Peer Review Processes in Open Source

    ERIC Educational Resources Information Center

    Wang, Jing

    2013-01-01

    Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…

  4. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  5. The Open Source Teaching Project (OSTP): Research Note.

    ERIC Educational Resources Information Center

    Hirst, Tony

    The Open Source Teaching Project (OSTP) is an attempt to apply a variant of the successful open source software approach to the development of educational materials. Open source software is software licensed in such a way as to allow anyone the right to modify and use it. From such a simple premise, a whole industry has arisen, most notably in the…

  8. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  9. Open-Source Instructional Materials in Astronomy

    NASA Astrophysics Data System (ADS)

    Robertson, T. H.

    2004-12-01

    Instructional materials are being developed in an open-source environment for introductory astronomy courses. These materials are being developed on, and will be available through, the LON-CAPA network accessed through the internet. Advantages of this system, which include materials sharing, free-software, search capabilities, context sensitive help and branching, metadata and on-line evaluation, will be discussed. Materials developed to date are limited primarily to personalized homework with a variety of question types for large (n = 100 student) classes at the Astronomy 101 and algebra-based astronomy levels. A progress report, as well as preliminary assessment data, will be provided on the scope of materials developed to date. Plans for future expansion will be presented. This work was funded in part by grants from Ball State University.

  10. An Affordable Open-Source Turbidimeter

    PubMed Central

    Kelley, Christopher D.; Krolick, Alexander; Brunner, Logan; Burklund, Alison; Kahn, Daniel; Ball, William P.; Weber-Shirk, Monroe

    2014-01-01

    Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring. PMID:24759114

  11. NMRFx Processor: a cross-platform NMR data processing program.

    PubMed

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  12. methyLiftover: cross-platform DNA methylation data integration.

    PubMed

    Titus, Alexander J; Houseman, E Andrés; Johnson, Kevin C; Christensen, Brock C

    2016-08-15

    The public availability of high-throughput molecular data provides new opportunities for researchers to advance discovery, replication and validation efforts. One common challenge in leveraging such data is the diversity of measurement approaches and platforms and a lack of utilities enabling cross-platform comparisons among data sources for analysis. We present a method to map DNA methylation data from bisulfite sequencing approaches to CpG sites measured with the widely used Illumina methylation bead-array platforms. Correlations and median absolute deviations support the validity of using bisulfite sequencing data in combination with Illumina bead-array methylation data. https://github.com/Christensen-Lab-Dartmouth/methyLiftover includes source, documentation and data references. Contact: brock.c.christensen@dartmouth.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
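
    Conceptually, the mapping is a join on genomic coordinates between bisulfite-sequencing CpG calls and array probe locations, as in the toy sketch below (the coordinates and probe identifier are made up for illustration; this is not the methyLiftover implementation).

        # Toy coordinate join between bisulfite CpG calls and array probes.
        def map_to_probes(bisulfite_calls, probe_coordinates):
            """bisulfite_calls: {(chrom, pos): beta}; probe_coordinates: {probe_id: (chrom, pos)}."""
            return {probe_id: bisulfite_calls[coord]
                    for probe_id, coord in probe_coordinates.items()
                    if coord in bisulfite_calls}

        # Example (made-up coordinate and probe id):
        # map_to_probes({("chr1", 15865): 0.83}, {"cg_example_01": ("chr1", 15865)})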

  13. Cross-platform hypermedia examinations on the Web.

    PubMed Central

    Williams, T. W.; Giuse, N. B.; Huber, J. T.; Janco, R. L.

    1995-01-01

    The authors developed a multiple-choice medical testing system delivered using the World Wide Web. It evolved from an older, single-platform, locally-developed computer-based examination. The old system offered a number of advantages over traditional paper-based examinations, such as digital graphics and quicker, easier scoring. The new system builds on these advantages with its true cross-platform design and the addition of hypertext learning responses. The benefits of this system will increase as more medical educational resources migrate to the Web. Faculty and student feedback has been positive. The authors encourage other institutions to experiment with Web-based teaching materials, including examinations. PMID:8563333

  14. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In a large institution's PACS, patient data may often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offer the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow access to several DICOM archives through a single point of access. An interactive web-based front-end allows users to search image data seamlessly from the different archives and display the results or route the image data to another DICOM-compliant destination. An XML-based API allows other software programs to easily benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in DICOM. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under open source free licensing and developed using open-source components (Apache Tomcat for the web server, MySQL for the database, OJB for object/relational data mapping, etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multiple institutions through appropriate security and encryption mechanisms.

  15. An open source business model for malaria.

    PubMed

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  16. Open Source Hardware for DIY Environmental Sensing

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  17. Visi—A VTK- and QT-Based Open-Source Project for Scientific Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Chen, Cheng-Kai

    2009-03-01

    In this paper, we present an open-source project, Visi, for high-dimensional engineering and scientific data visualization. Visi provides a state-of-the-art interactive user interface and graphics kernels based upon Qt (a cross-platform GUI toolkit) and VTK (an object-oriented visualization library). When Visi is initialized, a preliminary window is activated by Qt, and the VTK kernel is simultaneously embedded into the window, where the graphics resources are allocated. Visualization is controlled through an interactive interface so that the data are rendered according to the user's preferences. The developed framework possesses high flexibility and extensibility for advanced functions (e.g., object combination) and further applications. Applications of Visi to data visualization in various fields, such as protein structures in bioinformatics, 3D semiconductor transistors, and interconnects of very-large-scale integration (VLSI) layouts, are also illustrated to show its performance. The developed open-source project is available on our project website [1].

  18. Open source PIV software applied to streaming, time-resolved PIV data

    NASA Astrophysics Data System (ADS)

    Taylor, Zachary; Gurka, Roi; Liberzon, Alex; Kopp, Gregory

    2008-11-01

    The data handling requirements for time-resolved PIV data have increased substantially in recent years with the advent of high-speed imaging and real-time streaming. Therefore, there is a need for new hardware and software solutions for data storage and analysis. The presented solution is based on open source software (OSS), which has proven to be a successful means of development. This includes the PIV algorithms and flow analysis software. The solution, based on the OSS known as "URAPIV," was originally developed in Matlab and is now also available in Python. The advantage of these scripting languages lies in their highly customizable platforms; however, their routines cannot compete with commercially available software for computational speed. Thus, an effort has been undertaken to develop URAPIV-C++, a GUI based on the Qt 4 cross-platform open source library. This provides users with features commonly found in commercial packages and is comparable to them in processing speed. The uniqueness of this package is in its complete handling of PIV experiments, from the algorithms to post-analysis of large data sets, under an OSS license. The package and its features are utilized in the recent STR-PIV system, which will be operable at the Advanced Facility for Avian Research at UWO. The wake flow behind an elongated body will be presented as a demonstration.
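
    As background for what PIV codes such as URAPIV compute at each interrogation window, the textbook FFT-based cross-correlation step can be written in a few lines of NumPy, as sketched below; this is a generic illustration, not code taken from URAPIV or URAPIV-C++.

        # Textbook FFT cross-correlation for one PIV interrogation window pair.
        import numpy as np

        def window_displacement(window_a, window_b):
            """Return the integer (dy, dx) particle displacement from window_a to window_b."""
            a = window_a - window_a.mean()
            b = window_b - window_b.mean()
            corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap peaks beyond half the window size into negative displacements.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))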

  19. Developing an Open Source Option for NASA Software

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Parks, John W. (Technical Monitor)

    2003-01-01

    We present arguments in favor of developing an Open Source option for NASA software; in particular we discuss how Open Source is compatible with NASA's mission. We compare and contrast several of the leading Open Source licenses, and propose one - the Mozilla license - for use by NASA. We also address some of the related issues for NASA with respect to Open Source. In particular, we discuss some of the elements in the External Release of NASA Software document (NPG 2210.1A) that will likely have to be changed in order to make Open Source a reality within the agency.

  20. Query Health: standards-based, cross-platform population health surveillance.

    PubMed

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under

  1. Query Health: standards-based, cross-platform population health surveillance

    PubMed Central

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussions This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371

  2. Open Source Testing Capability for Geospatial Software

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2013-12-01

    resource for technologists responsible for interoperability among scientific tools that are used for sharing data and linking models, both within and between Earth science disciplines. This presentation will focus on the OGC compliance infrastructure and its open source tools, open source tests, and an open issue tracker that can be used to improve scientific software. [1] http://www.opengeospatial.org/resource/products/stats [2] http://cite.opengeospatial.org/teamengine/ [3] http://cite.opengeospatial.org/te2

  3. The Open Source Snowpack modelling ecosystem

    NASA Astrophysics Data System (ADS)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

    Although a large number of numerical snow models are available, only a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure based on the assumption that in steep terrain, the most relevant information is

  4. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration

    PubMed Central

    Dryden, Michael D. M.; Wheeler, Aaron R.

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100

  5. Open-source solutions for SPIMage processing.

    PubMed

    Schmied, Christopher; Stamataki, Evangelia; Tomancak, Pavel

    2014-01-01

    Light sheet microscopy is an emerging technique allowing comprehensive visualization of dynamic biological processes, at high spatial and temporal resolution without significant damage to the sample by the imaging process itself. It thus lends itself to time-lapse observation of fluorescently labeled molecular markers over long periods of time in a living specimen. In combination with sample rotation, light sheet microscopy, in particular its selective plane illumination microscopy (SPIM) flavor, enables imaging of relatively large specimens, such as embryos of animal model organisms, in their entirety. The benefits of SPIM multiview imaging come at the cost of image data postprocessing necessary to deliver the final output that can be analyzed. Here, we provide a set of practical recipes that walk biologists through the complex processes of SPIM data registration, fusion, deconvolution, and time-lapse registration using publicly available open-source tools. We explain, in plain language, the basic principles behind SPIM image-processing algorithms that should enable users to make informed decisions during parameter tuning of the various processing steps applied to their own datasets. Importantly, the protocols presented here are applicable equally to processing of multiview SPIM data from the commercial Zeiss Lightsheet Z.1 microscope and from the open-access SPIM platforms such as OpenSPIM. © 2014 Elsevier Inc. All rights reserved.

  6. XNAT Central: Open sourcing imaging research data.

    PubMed

    Herrick, Rick; Horton, William; Olsen, Timothy; McKay, Michael; Archie, Kevin A; Marcus, Daniel S

    2016-01-01

    XNAT Central is a publicly accessible medical imaging data repository based on the XNAT open-source imaging informatics platform. It hosts a wide variety of research imaging data sets. The primary motivation for creating XNAT Central was to provide a central repository to host and provide access to a wide variety of neuroimaging data. In this capacity, XNAT Central hosts a number of data sets from research labs and investigative efforts from around the world, including the OASIS Brains imaging studies, the NUSDAST study of schizophrenia, and more. Over time, XNAT Central has expanded to include imaging data from many different fields of research, including oncology, orthopedics, cardiology, and animal studies, but continues to emphasize neuroimaging data. Through the use of XNAT's DICOM metadata extraction capabilities, XNAT Central provides a searchable repository of imaging data that can be referenced by groups, labs, or individuals working in many different areas of research. The future development of XNAT Central will be geared towards greater ease of use as a reference library of heterogeneous neuroimaging data and associated synthetic data. It will also become a tool for making data available supporting published research and academic articles.

  7. An open-source laser electronics suite

    NASA Astrophysics Data System (ADS)

    Pisenti, Neal C.; Reschovsky, Benjamin J.; Barker, Daniel S.; Restelli, Alessandro; Campbell, Gretchen K.

    2016-05-01

    We present an integrated set of open-source electronics for controlling external-cavity diode lasers and other instruments in the laboratory. The complete package includes a low-noise circuit for driving high-voltage piezoelectric actuators, an ultra-stable current controller based on the design of, and a high-performance, multi-channel temperature controller capable of driving thermo-electric coolers or resistive heaters. Each circuit (with the exception of the temperature controller) is designed to fit in a Eurocard rack equipped with a low-noise linear power supply capable of driving up to 5 A at +/- 15 V. A custom backplane allows signals to be shared between modules, and a digital communication bus makes the entire rack addressable by external control software over TCP/IP. The modular architecture makes it easy for additional circuits to be designed and integrated with existing electronics, providing a low-cost, customizable alternative to commercial systems without sacrificing performance.

  8. Open source integrated modeling environment Delta Shell

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps to solve problems that are difficult to model using a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remains a challenging task. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse, either separately or as part of the integrated environment, which can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  9. Multi-modal imaging and cancer therapy using lanthanide oxide nanoparticles: current status and perspectives.

    PubMed

    Park, J Y; Chang, Y; Lee, G H

    2015-01-01

    Biomedical imaging is an essential tool for diagnosis and therapy of diseases such as cancers. It is likely true that medicine has developed with biomedical imaging methods. Sensitivity and resolution of biomedical imaging methods can be improved with imaging agents. Furthermore, it would be ideal if imaging agents could also be used as therapeutic agents, so that one dose could serve for both diagnosis and therapy of diseases (i.e., theragnosis). This would simplify medical treatment of diseases and would also benefit patients. Mixed ((Ln1)x(Ln2)yO3, x + y = 2) or unmixed (Ln2O3) lanthanide (Ln) oxide nanoparticles (Ln = Eu, Gd, Dy, Tb, Ho, Er) are potential multi-modal imaging and cancer therapeutic agents. The lanthanides have a variety of magnetic and optical properties, useful for magnetic resonance imaging (MRI) and fluorescent imaging (FI), respectively. They also highly attenuate X-ray beams, which is useful for X-ray computed tomography (CT). In addition, gadolinium-157 ((157)Gd) has the highest thermal neutron capture cross section among stable radionuclides, useful for gadolinium neutron capture therapy (GdNCT). Therefore, mixed or unmixed lanthanide oxide nanoparticles can be used for multi-modal imaging methods (i.e., MRI-FI, MRI-CT, CT-FI, and MRI-CT-FI) and cancer therapy (i.e., GdNCT). Since mixed or unmixed lanthanide oxide nanoparticles are single-phase and solid-state, they can be easily synthesized, and are compact and robust, which will be beneficial to biomedical applications. In this review the physical properties of the lanthanides, and the synthesis, characterization, multi-modal imaging, and cancer therapy of mixed and unmixed lanthanide oxide nanoparticles are discussed.

  10. A hypo-status in drug-dependent brain revealed by multi-modal MRI.

    PubMed

    Wang, Ze; Suh, Jesse; Duan, Dingna; Darnley, Stefanie; Jing, Ying; Zhang, Jian; O'Brien, Charles; Childress, Anna Rose

    2016-09-22

    Drug addiction is a chronic brain disorder with no proven effective cure. Assessing both structural and functional brain alterations by using multi-modal, rather than purely unimodal imaging techniques, may provide a more comprehensive understanding of the brain mechanisms underlying addiction, which in turn may facilitate future treatment strategies. However, this type of research remains scarce in the literature. We acquired multi-modal magnetic resonance imaging from 20 cocaine-addicted individuals and 19 age-matched controls. Compared with controls, cocaine addicts showed a multi-modal hypo-status with (1) decreased brain tissue volume in the medial and lateral orbitofrontal cortex (OFC); (2) hypo-perfusion in the prefrontal cortex, anterior cingulate cortex, insula, right temporal cortex and dorsolateral prefrontal cortex and (3) reduced irregularity of resting state activity in the OFC and limbic areas, as well as the cingulate, visual and parietal cortices. In the cocaine-addicted brain, larger tissue volume in the medial OFC, anterior cingulate cortex and ventral striatum and smaller insular tissue volume were associated with higher cocaine dependence levels. Decreased perfusion in the amygdala and insula was also correlated with higher cocaine dependence levels. Tissue volume, perfusion, and brain entropy in the insula and prefrontal cortex, all showed a trend of negative correlation with drug craving scores. The three modalities showed voxel-wise correlation in various brain regions, and combining them improved patient versus control brain classification accuracy. These results, for the first time, demonstrate a comprehensive cocaine-dependence and craving-related hypo-status regarding the tissue volume, perfusion and resting brain irregularity in the cocaine-addicted brain. © 2016 Society for the Study of Addiction.

  11. A graph-based approach for the retrieval of multi-modality medical images.

    PubMed

    Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan

    2014-02-01

    In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state
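
    As an informal illustration of the proximity-constrained graph described above, the Python sketch below (using NetworkX) connects tumour vertices only to organs whose centroids lie within a distance threshold, instead of building a complete graph over all structures. The structure names, coordinates and threshold are hypothetical and are not taken from the paper.

        import numpy as np
        import networkx as nx

        # Hypothetical centroids (mm) of segmented structures in one PET-CT volume.
        organs = {"lung_L": (60, 80, 120), "lung_R": (140, 80, 120), "liver": (120, 60, 60)}
        tumours = {"tumour_1": (65, 78, 118)}

        def build_constrained_graph(tumours, organs, max_dist=50.0):
            """Connect each tumour only to organs within max_dist of it."""
            g = nx.Graph()
            g.add_nodes_from(organs, kind="organ")
            g.add_nodes_from(tumours, kind="tumour")
            for t, tp in tumours.items():
                for o, op in organs.items():
                    d = float(np.linalg.norm(np.array(tp) - np.array(op)))
                    if d <= max_dist:
                        g.add_edge(t, o, distance=d)   # edge attribute available for matching
            return g

        g = build_constrained_graph(tumours, organs)
        print(list(g.edges(data=True)))   # only the nearby lung is linked to the tumour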

  12. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation.

    PubMed

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-03-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.
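
    The following PyTorch sketch shows the general idea of treating co-registered T1, T2 and FA patches as separate input channels of a small fully convolutional network that outputs per-pixel WM/GM/CSF scores. It is a toy stand-in for illustration only; the architecture, patch size and layer sizes are not those of the paper.

        import torch
        import torch.nn as nn

        class MultiModalSegNet(nn.Module):
            """Toy network: 3 input channels (T1, T2, FA), 3 output maps (WM, GM, CSF)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                )
                self.classifier = nn.Conv2d(32, 3, kernel_size=1)

            def forward(self, x):
                return self.classifier(self.features(x))

        # One co-registered 64x64 patch per modality, stacked along the channel axis.
        patch = torch.randn(1, 3, 64, 64)          # (batch, modality, height, width)
        logits = MultiModalSegNet()(patch)         # (1, 3, 64, 64) per-pixel class scores
        labels = torch.randint(0, 3, (1, 64, 64))  # dummy manual segmentation
        loss = nn.CrossEntropyLoss()(logits, labels)
        loss.backward()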

  13. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
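
    A modular pipeline of the kind discussed above can be organized so that per-channel feature extractors and the classifier are swappable components. The scikit-learn sketch below is a generic stand-in; the features, channel count and classifier are placeholders, not the report's example code.

        import numpy as np
        from sklearn.pipeline import Pipeline, FeatureUnion
        from sklearn.preprocessing import FunctionTransformer, StandardScaler
        from sklearn.svm import SVC

        # Toy data: 200 windows x 6 channels x 256 samples of physiological signals.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 6, 256))
        y = rng.integers(0, 2, size=200)                 # two hypothetical operator states

        # Simple per-window features computed over every channel.
        power = FunctionTransformer(lambda w: (w ** 2).mean(axis=2))
        variance = FunctionTransformer(lambda w: w.var(axis=2))

        model = Pipeline([
            ("features", FeatureUnion([("power", power), ("variance", variance)])),
            ("scale", StandardScaler()),
            ("clf", SVC(kernel="rbf")),
        ])
        model.fit(X[:150], y[:150])
        print("hold-out accuracy:", model.score(X[150:], y[150:]))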

  14. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
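
    For reference, the transport of intensity equation underlying such systems relates the measured axial intensity derivative to the transverse phase gradient; in its standard paraxial form (generic notation, not specific to this paper):

        % Transport of intensity equation, wavenumber k = 2*pi/lambda:
        -k \, \frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
            = \nabla_\perp \cdot \left[ I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \right]

    Measuring I at two or more closely spaced defocus planes (here via the tunable lens) provides the left-hand side, from which the phase is recovered by solving this elliptic equation.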

  15. A low-power multi-modal body sensor network with application to epileptic seizure monitoring.

    PubMed

    Altini, Marco; Del Din, Silvia; Patel, Shyamal; Schachter, Steven; Penders, Julien; Bonato, Paolo

    2011-01-01

    Monitoring patients' physiological signals during their daily activities in the home environment is one of the challenges of health care. New ultra-low-power wireless technologies could help to achieve this goal. In this paper we present a low-power, multi-modal, wearable sensor platform for the simultaneous recording of activity and physiological data. First, we provide a description of the wearable sensor platform and its characteristics with respect to power consumption. Second, we present the preliminary results of a comparison between our sensors and a reference system, on healthy subjects, to test the reliability of the detected physiological (electrocardiogram and respiration) and electromyography signals.

  16. Continuous multi-modality brain imaging reveals modified neurovascular seizure response after intervention

    PubMed Central

    Ringuette, Dene; Jeffrey, Melanie A.; Dufour, Suzie; Carlen, Peter L.; Levi, Ofer

    2017-01-01

    We developed a multi-modal brain imaging system to investigate the relationship between blood flow, blood oxygenation/volume, intracellular calcium and electrographic activity during acute seizure-like events (SLEs), both before and after pharmacological intervention. Rising blood volume was highly specific to SLE-onset whereas blood flow was more correlated with all electrographic activity. Intracellular calcium spiked between SLEs and at SLE-onset with oscillation during SLEs. Modified neurovascular and ionic SLE responses were observed after intervention and the interval between SLEs became shorter and more inconsistent. Comparison of artery and vein pulsatile flow suggest proximal interference and greater vascular leakage prior to intervention. PMID:28270990

  17. Multi-modal analysis for person type classification in news video

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Hauptmann, Alexander G.

    2004-12-01

    Classifying the identities of people appearing in broadcast news video into anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidences, such as the speech identity, transcript clues, temporal video structure, named entities, and uses a statistical learning approach to combine all the features for person type classification. Experiments conducted on ABC World News Tonight video have demonstrated the effectiveness of the approach, and the contributions of different categories of features have been compared.

  18. Multi-modal analysis for person type classification in news video

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Hauptmann, Alexander G.

    2005-01-01

    Classifying the identities of people appearing in broadcast news video into anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidences, such as the speech identity, transcript clues, temporal video structure, named entities, and uses a statistical learning approach to combine all the features for person type classification. Experiments conducted on ABC World News Tonight video have demonstrated the effectiveness of the approach, and the contributions of different categories of features have been compared.

  19. Multi-modal spectroscopic imaging with synchrotron light to study mechanisms of brain disease

    NASA Astrophysics Data System (ADS)

    Summers, Kelly L.; Fimognari, Nicholas; Hollings, Ashley; Kiernan, Mitchell; Lam, Virginie; Tidy, Rebecca J.; Takechi, Ryu; George, Graham N.; Pickering, Ingrid J.; Mamo, John C.; Harris, Hugh H.; Hackett, Mark J.

    2017-04-01

    The international health care costs associated with Alzheimer's disease (AD) and dementia have been predicted to reach $2 trillion USD by 2030. As such, there is urgent need to develop new treatments and diagnostic methods to stem an international health crisis. A major limitation to therapy and diagnostic development is the lack of complete understanding about the disease mechanisms. Spectroscopic methods at synchrotron light sources, such as FTIR, XRF, and XAS, offer a "multi-modal imaging platform" to reveal a wealth of important biochemical information in situ within ex vivo tissue sections, to increase our understanding of disease mechanisms.

  20. Multi-Modal Imaging with a Toolbox of Influenza A Reporter Viruses.

    PubMed

    Tran, Vy; Poole, Daniel S; Jeffery, Justin J; Sheahan, Timothy P; Creech, Donald; Yevtodiyenko, Aleksey; Peat, Andrew J; Francis, Kevin P; You, Shihyun; Mehle, Andrew

    2015-10-13

    Reporter viruses are useful probes for studying multiple stages of the viral life cycle. Here we describe an expanded toolbox of fluorescent and bioluminescent influenza A reporter viruses. The enhanced utility of these tools enabled kinetic studies of viral attachment, infection, and co-infection. Multi-modal bioluminescence and positron emission tomography-computed tomography (PET/CT) imaging of infected animals revealed that antiviral treatment reduced viral load, dissemination, and inflammation. These new technologies and applications will dramatically accelerate in vitro and in vivo influenza virus studies.

  1. Developing a cross-platform port simulation system.

    SciTech Connect

    Nevins, M. R.

    1999-07-08

    With the advent of networked computer systems that connect disparate computer hardware and operating systems, it is important for port simulation systems to be able to run on a wide variety of computer platforms. This paper describes the design and implementation issues in reengineering the PORTSIM model in order to field the model to Windows-based systems as well as to Unix-based systems such as the Sun, Silicon Graphics, and HP workstations. The existing PORTSIM model was written to run on a Sun workstation running Unix. The model was initially implemented in MODSIM and C and utilized embedded SQL to retrieve port, ship, and cargo data from back-end OMCLE databases. Output reports, graphs, and tables for model results were written in C, utilizing third-party graphics libraries. This design and implementation worked well for the intended hardware platform and configuration, but as the number of model users grew and as the capabilities of the model expanded, a need developed to field the model to varying hardware configurations. This new requirement demanded that the existing design be modified to more easily allow for model fielding and maintenance. A phased approach is described that (1) identifies the existing model from which cross-platform development began, (2) delineates an intermediate client-server model that has been developed utilizing Java to allow for greater flexibility and ease in distributing and fielding the model, and (3) describes the final goals to be achieved in this development process.

  2. Differential network analysis from cross-platform gene expression data

    PubMed Central

    Zhang, Xiao-Fei; Ou-Yang, Le; Zhao, Xing-Ming; Yan, Hong

    2016-01-01

    Understanding how the structure of gene dependency network changes between two patient-specific groups is an important task for genomic research. Although many computational approaches have been proposed to undertake this task, most of them estimate correlation networks from group-specific gene expression data independently without considering the common structure shared between different groups. In addition, with the development of high-throughput technologies, we can collect gene expression profiles of same patients from multiple platforms. Therefore, inferring differential networks by considering cross-platform gene expression profiles will improve the reliability of network inference. We introduce a two-dimensional joint graphical lasso (TDJGL) model to simultaneously estimate group-specific gene dependency networks from gene expression profiles collected from different platforms and infer differential networks. TDJGL can borrow strength across different patient groups and data platforms to improve the accuracy of estimated networks. Simulation studies demonstrate that TDJGL provides more accurate estimates of gene networks and differential networks than previous competing approaches. We apply TDJGL to the PI3K/AKT/mTOR pathway in ovarian tumors to build differential networks associated with platinum resistance. The hub genes of our inferred differential networks are significantly enriched with known platinum resistance-related genes and include potential platinum resistance-related genes. PMID:27677586
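
    For orientation, a generic joint graphical lasso over K related groups maximizes penalized Gaussian log-likelihoods of the group-specific precision matrices; TDJGL extends this idea across a second dimension (data platforms), with the exact penalty defined in the paper. A hedged, generic form:

        % Theta^(k): precision matrix of group k; S^(k): its empirical covariance; n_k: sample size.
        \max_{\{\Theta^{(k)} \succ 0\}} \;
            \sum_{k=1}^{K} n_k \Bigl[ \log\det \Theta^{(k)}
                - \operatorname{tr}\bigl( S^{(k)} \Theta^{(k)} \bigr) \Bigr]
            - P\bigl( \Theta^{(1)}, \ldots, \Theta^{(K)} \bigr)

    Here P is a penalty combining elementwise sparsity with terms that encourage shared structure across groups (and, in TDJGL, across platforms); differential networks are then inferred from the differences between the estimated group-specific networks.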

  3. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
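
    As a small numerical illustration of the volume formula and the slope comparison described above (the numbers below are made up, not the study's measurements):

        import numpy as np

        def ellipsoid_volume(a, b, c):
            """V = (pi/6) * a * b * c, with a, b, c the maximum perpendicular diameters (mm)."""
            return np.pi / 6.0 * a * b * c

        # Spherical test objects of the phantom (2-14 mm) and hypothetical "measured" volumes.
        diameters = np.array([2.0, 4.0, 7.0, 10.0, 14.0])
        reference = ellipsoid_volume(diameters, diameters, diameters)          # mm^3
        measured = reference * 1.02 + np.array([0.5, -1.0, 2.0, -3.0, 4.0])    # fake modality data

        slope, intercept = np.polyfit(reference, measured, 1)
        print(f"regression slope = {slope:.3f}")   # a slope near 1 indicates no systematic scale error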

  4. An Open Source Business Model for Malaria

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, ‘closed’ publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more “open source” approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President’s Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new

  5. The case for open-source software in drug discovery.

    PubMed

    DeLano, Warren L

    2005-02-01

    Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.

  6. The successes and challenges of open-source biopharmaceutical innovation.

    PubMed

    Allarakhia, Minna

    2014-05-01

    Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the data-material access challenge associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that the assessment of when and how value creation will occur, through open-source biopharmaceutical innovation, is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of what are now big-data-based open-source initiatives. A continued focus on early-stage value creation is not advisable. Instead, it would be more advisable to adopt an approach where stakeholders transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.

  7. An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup

    PubMed Central

    Miron, Alina; Rogozan, Alexandrina; Ainouz, Samia; Bensrhair, Abdelaziz; Broggi, Alberto

    2015-01-01

    The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in far-infrared over visible domains. The experiments show that the feature families intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains, are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain multi-modality multi-feature fusion. PMID:26076403
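
    A minimal sketch of the kind of multi-modality feature fusion evaluated here: HOG descriptors computed independently on a visible and a FIR patch, concatenated, and fed to a linear classifier (scikit-image and scikit-learn). Random patches stand in for the RIFIR data, and the HOG parameters are generic defaults rather than the paper's settings.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        def fused_descriptor(visible_patch, fir_patch):
            """Concatenate HOG features computed separately in each domain."""
            params = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            return np.concatenate([hog(visible_patch, **params), hog(fir_patch, **params)])

        # Random 64x32 patches standing in for pedestrian / background windows.
        rng = np.random.default_rng(0)
        X = np.array([fused_descriptor(rng.random((64, 32)), rng.random((64, 32)))
                      for _ in range(40)])
        y = rng.integers(0, 2, size=40)
        clf = LinearSVC().fit(X, y)
        print(X.shape, clf.score(X, y))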

  8. A Flamelet Modeling Approach for Multi-Modal Combustion with Inhomogeneous Inlets

    NASA Astrophysics Data System (ADS)

    Perry, Bruce A.; Mueller, Michael E.

    2016-11-01

    Large eddy simulations (LES) of turbulent combustion often employ models that make assumptions about the underlying flame structure. For example, flamelet models based on both premixed and nonpremixed flame structures have been implemented successfully in a variety of contexts. While previous flamelet models have been developed to account for multi-modal combustion or complex inlet conditions, none have been developed that can account for both effects simultaneously. Here, a new approach is presented that extends a nonpremixed, two-mixture fraction approach for compositionally inhomogeneous inlet conditions to partially premixed combustion. The approach uses the second mixture fraction to indicate the locally dominant combustion mode based on flammability considerations and switch between premixed and nonpremixed combustion models as appropriate. To assess this approach, LES predictions for this and other flamelet-based models are compared to data from a turbulent piloted jet burner with compositionally inhomogeneous inlets, which has been shown experimentally to exhibit multi-modal combustion. This work was supported by the NSF Graduate Research Fellowship Program under Grant DGE 1148900.

  9. In vivo monitoring of structural and mechanical changes of tissue scaffolds by multi-modality imaging

    PubMed Central

    Park, Dae Woo; Ye, Sang-Ho; Jiang, Hong Bin; Dutta, Debaditya; Nonaka, Kazuhiro; Wagner, William R.; Kim, Kang

    2014-01-01

    Degradable tissue scaffolds are implanted to serve a mechanical role while healing processes occur and putatively assume the physiological load as the scaffold degrades. Mechanical failure during this period can be unpredictable as monitoring of structural degradation and mechanical strength changes at the implant site is not readily achieved in vivo, and non-invasively. To address this need, a multi-modality approach using ultrasound shear wave imaging (USWI) and photoacoustic imaging (PAI) for both mechanical and structural assessment in vivo was demonstrated with degradable poly(ester urethane)urea (PEUU) and polydioxanone (PDO) scaffolds. The fibrous scaffolds were fabricated with wet electrospinning, dyed with indocyanine green (ICG) for optical contrast in PAI, and implanted in the abdominal wall of 36 rats. The scaffolds were monitored monthly using USWI and PAI and were extracted at 0, 4, 8 and 12 wk for mechanical and histological assessment. The change in shear modulus of the constructs in vivo obtained by USWI correlated with the change in average Young's modulus of the constructs ex vivo obtained by compression measurements. The PEUU and PDO scaffolds exhibited distinctly different degradation rates and average PAI signal intensity. The distribution of PAI signal intensity also corresponded well to the remaining scaffolds as seen in explant histology. This evidence using a small animal abdominal wall repair model demonstrates that multi-modality imaging of USWI and PAI may allow tissue engineers to noninvasively evaluate concurrent mechanical stiffness and structural changes of tissue constructs in vivo for a variety of applications. PMID:24951048

  10. Study on electrodynamic sensor of multi-modality system for multiphase flow measurement

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Chen, Dixiang; Yang, Wuqiang

    2011-12-01

    Accurate measurement of multiphase flows, including gas/solids, gas/liquid, and liquid/liquid flows, is still challenging. In principle, electrical capacitance tomography (ECT) can be used to measure the concentration of solids in a gas/solids flow and the liquid (e.g., oil) fraction in a gas/liquid flow, if the liquid is non-conductive. Electrical resistance tomography (ERT) can be used to measure a gas/liquid flow, if the liquid is conductive. It has been attempted to use a dual-modality ECT/ERT system to measure both the concentration profile and the velocity profile by pixel-based cross correlation. However, this approach is not realistic because of the dynamic characteristics and the complexity of multiphase flows and the difficulties in determining the velocities by cross correlation. In this paper, the issues with dual modality ECT/ERT and the difficulties with pixel-based cross correlation will be discussed. A new adaptive multi-modality (ECT, ERT and electro-dynamic) sensor, which can be used to measure a gas/solids or gas/liquid flow, will be described. Especially, some details of the electrodynamic sensor of multi-modality system such as sensing electrodes optimum design, electrostatic charge amplifier, and signal processing will be discussed. Initial experimental results will be given.
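
    For context on the cross-correlation approach mentioned above: with two axially spaced electrodes, a transit-time velocity estimate is commonly obtained from the lag of the cross-correlation peak (v = L / tau). The NumPy sketch below uses synthetic signals and generic parameter values; it is illustrative only and not the sensor or algorithm described in the paper.

        import numpy as np

        fs = 2000.0        # sampling rate (Hz)
        spacing = 0.05     # axial spacing between the two electrodes (m)
        rng = np.random.default_rng(0)

        # Synthetic charge-fluctuation signal at the upstream electrode and a
        # delayed, noisier copy at the downstream electrode.
        upstream = rng.standard_normal(int(fs))            # 1 s of signal
        true_delay = 0.0125                                # s, i.e. 4 m/s over 5 cm
        downstream = np.roll(upstream, int(round(true_delay * fs))) \
                     + 0.3 * rng.standard_normal(upstream.size)

        # Cross-correlate, convert the peak lag to a transit time, then to velocity.
        corr = np.correlate(downstream - downstream.mean(),
                            upstream - upstream.mean(), mode="full")
        lag = int(np.argmax(corr)) - (upstream.size - 1)   # lag in samples
        tau = lag / fs
        print(f"transit time = {tau * 1e3:.1f} ms, velocity = {spacing / tau:.2f} m/s")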

  11. Study on electrodynamic sensor of multi-modality system for multiphase flow measurement.

    PubMed

    Deng, Xiang; Chen, Dixiang; Yang, Wuqiang

    2011-12-01

    Accurate measurement of multiphase flows, including gas/solids, gas/liquid, and liquid/liquid flows, is still challenging. In principle, electrical capacitance tomography (ECT) can be used to measure the concentration of solids in a gas/solids flow and the liquid (e.g., oil) fraction in a gas/liquid flow, if the liquid is non-conductive. Electrical resistance tomography (ERT) can be used to measure a gas/liquid flow, if the liquid is conductive. It has been attempted to use a dual-modality ECT/ERT system to measure both the concentration profile and the velocity profile by pixel-based cross correlation. However, this approach is not realistic because of the dynamic characteristics and the complexity of multiphase flows and the difficulties in determining the velocities by cross correlation. In this paper, the issues with dual modality ECT/ERT and the difficulties with pixel-based cross correlation will be discussed. A new adaptive multi-modality (ECT, ERT and electro-dynamic) sensor, which can be used to measure a gas/solids or gas/liquid flow, will be described. Especially, some details of the electrodynamic sensor of multi-modality system such as sensing electrodes optimum design, electrostatic charge amplifier, and signal processing will be discussed. Initial experimental results will be given.

  12. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival

    PubMed Central

    Phan, John H.; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D.

    2016-01-01

    The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance. PMID:27493999
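
    The two integration schemes compared above can be prototyped directly in scikit-learn, which provides both a majority-vote ensemble and a stacking ensemble. The sketch below uses synthetic data as a stand-in for the two modalities (the first 20 columns playing the role of image features, the rest RNA-seq features); it illustrates the comparison, not the paper's actual models or results.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import FunctionTransformer, StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=0)

        # One base learner per "modality", each seeing only its own feature block.
        image_model = make_pipeline(FunctionTransformer(lambda A: A[:, :20]),
                                    StandardScaler(), SVC(probability=True, random_state=0))
        rna_model = make_pipeline(FunctionTransformer(lambda A: A[:, 20:]),
                                  StandardScaler(), LogisticRegression(max_iter=1000))
        estimators = [("image", image_model), ("rna", rna_model)]

        vote = VotingClassifier(estimators, voting="hard")                            # majority vote
        stack = StackingClassifier(estimators, final_estimator=LogisticRegression())  # stacking

        for name, clf in [("majority vote", vote), ("stacked generalization", stack)]:
            print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))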

  13. Multi-Modal Use of a Socially Directed Call in Bonobos

    PubMed Central

    Genty, Emilie; Clay, Zanna; Hobaiter, Catherine; Zuberbühler, Klaus

    2014-01-01

    ‘Contest hoots’ are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually agonistic chase. Interestingly, ‘contest hoots’ were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted by preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction. PMID:24454745

  14. Treating psychological trauma in first responders: a multi-modal paradigm.

    PubMed

    Flannery, Raymond B

    2015-06-01

    Responding to critical incidents may result in 5.9-22% of first responders developing psychological trauma and posttraumatic stress disorder. These impacts may be physical, mental, and/or behavioral. This population remains at risk, given the daily occurrence of critical incidents. Current treatments, primarily focused on combat and rape victims, have included single and double interventions, which have proven helpful to some but not all victims and one standard of care has remained elusive. However, even though the need is established, research on the treatment interventions of first responders has been limited. Given the multiplicity of impacts from psychological trauma and the inadequacies of responder treatment intervention research thus far, this paper proposes a paradigmatic shift from single/double treatment interventions to a multi-modal approach to first responder victim needs. A conceptual framework based on psychological trauma is presented and possible multi-modal interventions selected from the limited, extant first responder research are utilized to illustrate how the approach would work and to encourage clinical and experimental research into first responder treatment needs.

  15. Aggregation for Computing Multi-Modal Stationary Distributions in 1-D Gene Regulatory Networks.

    PubMed

    Avcu, Neslihan; Pekergin, Nihal; Pekergin, Ferhan; Guzelis, Cuneyt

    2017-04-27

    This paper proposes aggregation-based, three-stage algorithms to overcome the numerical problems encountered in computing stationary distributions and mean first passage times for multi-modal birth-death processes with large state spaces. The considered birth-death processes, which are defined by Chemical Master Equations, are used to model the stochastic behavior of gene regulatory networks. Computing stationary probabilities for a multi-modal distribution from Chemical Master Equations is subject to numerical problems because the probability values run out of the representation range of standard programming languages as the size of the state space increases. Aggregation is shown to provide a solution to this problem by first analyzing reduced-size subsystems in isolation and then considering the transitions between these subsystems. The proposed algorithms are applied to study the bimodal behavior of the lac operon of E. coli described with a one-dimensional birth-death model. Thus the determination of the entire parameter range of bimodality for the stochastic model of the lac operon is achieved.
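
    To make the representation-range issue concrete: for a one-dimensional birth-death process the stationary distribution follows from detailed balance, and the product of rate ratios can be accumulated in log space. The Python sketch below is a generic illustration of that computation with made-up, gene-expression-flavoured rates; it is not the aggregation algorithm proposed in the paper.

        import numpy as np
        from scipy.special import logsumexp

        def stationary_log(birth, death):
            """Stationary distribution of a 1-D birth-death chain on states 0..N.
            birth[i] is the rate of i -> i+1 and death[i] the rate of i+1 -> i.
            Detailed balance gives pi_n proportional to prod_{i<n} birth[i]/death[i];
            the product is accumulated in log space so small probabilities in the
            valley between modes stay representable."""
            log_w = np.concatenate(([0.0], np.cumsum(np.log(birth) - np.log(death))))
            return np.exp(log_w - logsumexp(log_w))

        # Toy bimodal example on states 0..400 (rates in arbitrary units).
        i = np.arange(400)
        birth = 20.0 + 130.0 * (i >= 150)   # production switches to a high level
        death = 0.5 * (i + 1)               # first-order degradation
        pi = stationary_log(birth, death)
        print(pi.sum())                     # sums to 1; pi has local maxima near n = 40 and n = 300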

  16. Multi-modal signal acquisition using a synchronized wireless body sensor network in geriatric patients.

    PubMed

    Pflugradt, Maik; Mann, Steffen; Tigges, Timo; Görnig, Matthias; Orglmeister, Reinhold

    2016-02-01

    Wearable home-monitoring devices acquiring various biosignals such as the electrocardiogram, photoplethysmogram, electromyogram, respirational activity and movements have become popular in many fields of research, medical diagnostics and commercial applications. Ambulatory settings in particular introduce still-unsolved challenges to the development of sensor hardware and smart signal processing approaches. This work gives a detailed insight into a novel wireless body sensor network and addresses critical aspects such as signal quality, synchronicity among multiple devices as well as the system's overall capabilities and limitations in cardiovascular monitoring. An early sign of typical cardiovascular diseases is often disturbed autonomic regulation, such as orthostatic intolerance. In that context, blood pressure measurements play an important role in observing abnormalities such as hypo- or hypertension. Non-invasive and unobtrusive blood pressure monitoring still poses a significant challenge, promoting alternative approaches including pulse wave velocity considerations. In the scope of this work, the presented hardware is applied to demonstrate the continuous extraction of multi-modal parameters such as pulse arrival time within a preliminary clinical study. A Schellong test to diagnose orthostatic hypotension, which is typically based on blood pressure cuff measurements, has been conducted, serving as an application that might significantly benefit from novel multi-modal measurement principles. It is further shown that the system's synchronicity is as precise as 30 μs and that the integrated analog preprocessing circuits and additional accelerometer data provide significant advantages in ambulatory measurement environments.

  17. Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.

    PubMed

    Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B

    2015-02-01

    Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code is publicly available. Copyright © 2014 Elsevier Inc. All rights reserved.
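
    As a generic stand-in for the contrast drawn above between dense and sparse decompositions (this is not the eigenanatomy implementation released with the paper), scikit-learn's SparsePCA can be compared with ordinary PCA on a toy "subjects x voxels" matrix:

        import numpy as np
        from sklearn.decomposition import PCA, SparsePCA

        # Toy data: 60 subjects x 500 voxels with three spatially compact signal sources.
        rng = np.random.default_rng(0)
        sources = np.zeros((3, 500))
        sources[0, 0:40] = 1.0
        sources[1, 200:260] = 1.0
        sources[2, 430:480] = 1.0
        X = rng.standard_normal((60, 3)) @ sources + 0.5 * rng.standard_normal((60, 500))

        dense = PCA(n_components=3).fit(X)
        sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

        # Dense components load on essentially every voxel; sparse ones on a small
        # subset, which is what keeps them anatomically interpretable.
        print("dense nonzeros: ", (np.abs(dense.components_) > 1e-12).sum(axis=1))
        print("sparse nonzeros:", (sparse.components_ != 0).sum(axis=1))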

  18. Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease.

    PubMed

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Zu, Chen; Nie, Feiping; Shen, Dinggang; Wu, Guorong

    2016-10-01

    Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is exclusively learned in the feature domain and may not be necessarily optimal in the label domain. This may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.

  19. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  20. Multi-modal diffuse optical techniques for breast cancer neoadjuvant chemotherapy monitoring (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cochran, Jeffrey M.; Busch, David R.; Ban, Han Y.; Kavuri, Venkaiah C.; Schweiger, Martin J.; Arridge, Simon R.; Yodh, Arjun G.

    2017-02-01

    We present high spatial density, multi-modal, parallel-plate Diffuse Optical Tomography (DOT) imaging systems for the purpose of breast tumor detection. One hybrid instrument provides time domain (TD) and continuous wave (CW) DOT at 64 source fiber positions. The TD diffuse optical spectroscopy with PMT detection produces low-resolution images of absolute tissue scattering and absorption, while the spatially dense array of CCD-coupled detector fibers (108 detectors) provides higher-resolution CW images of relative tissue optical properties. Reconstruction of the tissue optical properties, along with total hemoglobin concentration and tissue oxygen saturation, is performed using the TOAST software suite. Comparison of the spatially-dense DOT images and MR images allows for a robust validation of DOT against an accepted clinical modality. Additionally, the structural information from co-registered MR images is used as a spatial prior to improve the quality of the functional optical images and provide more accurate quantification of the optical and hemodynamic properties of tumors. We also present an optical-only imaging system that provides frequency domain (FD) DOT at 209 source positions with full CCD detection and incorporates optical fringe projection profilometry to determine the breast boundary. This profilometry serves as a spatial constraint, improving the quality of the DOT reconstructions while retaining the benefits of an optical-only device. We present initial images from both human subjects and phantoms to demonstrate the utility of high spatial density data and multi-modal information in DOT reconstruction with the two systems.

  1. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation and rotation invariance properties of these intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
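
    The correspondence-and-refinement stage above ultimately reduces to estimating a rigid transform from matched 3D feature locations. A minimal sketch of that sub-step (a standard SVD-based least-squares estimator, not the authors' full pipeline with outlier elimination) is:

      import numpy as np

      def rigid_transform_from_correspondences(P, Q):
          """Least-squares rigid transform (R, t) mapping points P onto matched points Q.
          P, Q: (n, 3) arrays of corresponding 3D feature locations."""
          cP, cQ = P.mean(0), Q.mean(0)
          H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered points
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                  # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = cQ - R @ cP
          return R, t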

  2. Modeling decision-making in single- and multi-modal medical images

    NASA Astrophysics Data System (ADS)

    Canosa, R. L.; Baum, K. G.

    2009-02-01

    This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations and potential errors (false positives and false negatives) in single-mode PET and MRI images and multi-modal fused PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging; therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the errors to be classified into four categories: false positives, search errors (false negatives never fixated), recognition errors (false negatives fixated for less than 350 milliseconds), and decision errors (false negatives fixated for more than 350 milliseconds). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the known error and ground truth locations extracted from a subset of the test images for each modality. The saliency model shows that lesion and error locations attract visual attention according to low-level image features such as color, luminance, and texture.
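
    The four error categories follow directly from the eye-tracking record: whether a missed lesion was fixated at all, and for how long. A small sketch of that classification rule is given below; the 350 ms threshold comes from the abstract, while the function signature is a hypothetical convenience.

      def classify_error(is_false_positive, fixated, fixation_ms, threshold_ms=350):
          """Classify one observer error using fixation data at the lesion location."""
          if is_false_positive:
              return "false positive"
          if not fixated:
              return "search error"            # false negative, never fixated
          if fixation_ms < threshold_ms:
              return "recognition error"       # fixated only briefly
          return "decision error"              # fixated long enough, still missed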

  3. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets and the placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  4. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively.
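
    As a bare-bones illustration of the non-negative factorization at the core of such a tracker, the following sketch learns one dictionary per modality and a shared non-negative coefficient matrix with standard multiplicative updates. It is an offline schematic of the non-negativity mechanism only, not the authors' online, M-estimation-weighted algorithm.

      import numpy as np

      def multimodal_nmf(V_list, n_atoms=10, n_iter=200, eps=1e-9):
          """V_list: list of (d_m, n) non-negative data matrices, one per modality.
          Learns one dictionary W_m per modality and a shared coefficient matrix H."""
          rng = np.random.default_rng(0)
          n = V_list[0].shape[1]
          W = [rng.random((V.shape[0], n_atoms)) for V in V_list]
          H = rng.random((n_atoms, n))

          for _ in range(n_iter):
              for m, V in enumerate(V_list):
                  # Multiplicative update keeps each dictionary non-negative.
                  W[m] *= (V @ H.T) / (W[m] @ H @ H.T + eps)
              # Update the shared coefficients against all modalities jointly.
              num = sum(W[m].T @ V for m, V in enumerate(V_list))
              den = sum(W[m].T @ W[m] @ H for m in range(len(V_list))) + eps
              H *= num / den
          return W, H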

  5. MINERVA: a multi-modality plugin-based radiation therapy treatment planning system.

    PubMed

    Wemple, C A; Wessol, D E; Nigg, D W; Cogliati, J J; Milvich, M; Fredrickson, C M; Perkins, M; Harkin, G J; Hartmann-Siantar, C L; Lehmann, J; Flickinger, T; Pletcher, D; Yuan, A; DeNardo, G L

    2005-01-01

    Researchers at the INEEL, MSU, LLNL and UCD have undertaken development of MINERVA, a patient-centric, multi-modal, radiation treatment planning system, which can be used for planning and analysing several radiotherapy modalities, either singly or combined, using common treatment planning tools. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The design also facilitates the future integration of improved planning technologies. The code is being developed with the Java programming language for interoperability. The MINERVA design includes the image processing, model definition and data analysis modules with a central module to coordinate communication and data transfer. Dose calculation is performed by source and transport plugin modules, which communicate either directly through the database or through MINERVA's openly published, extensible markup language (XML)-based application programmer's interface (API). All internal data are managed by a database management system and can be exported to other applications or new installations through the API data formats. A full computation path has been established for molecular-targeted radiotherapy treatment planning, with additional treatment modalities presently under development.

  6. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage.

  7. A Multi-Modal Approach to Assessing Recovery in Youth Athletes Following Concussion

    PubMed Central

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-01-01

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short and long term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one’s participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity in clinically driven research aimed specifically at exploring concussion within the youth sport population, and more specifically, multi-modal approaches to measuring recovery. This article provides an overview of a novel and multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community. PMID:25285728

  8. A multi-modal approach to assessing recovery in youth athletes following concussion.

    PubMed

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-09-25

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short and long term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one's participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity in clinically driven research aimed specifically at exploring concussion within the youth sport population, and more specifically, multi-modal approaches to measuring recovery. This article provides an overview of a novel and multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community.

  9. Multi-modal contributions to detoxification of acute pharmacotoxicity by a triglyceride micro-emulsion.

    PubMed

    Fettiplace, Michael R; Lis, Kinga; Ripper, Richard; Kowal, Katarzyna; Pichurko, Adrian; Vitello, Dominic; Rubinstein, Israel; Schwartz, David; Akpa, Belinda S; Weinberg, Guy

    2015-01-28

    Triglyceride micro-emulsions such as Intralipid® have been used to reverse cardiac toxicity induced by a number of drugs but reservations about their broad-spectrum applicability remain because of the poorly understood mechanism of action. Herein we report an integrated mechanism of reversal of bupivacaine toxicity that includes both transient drug scavenging and a cardiotonic effect that couple to accelerate movement of the toxin away from sites of toxicity. We thus propose a multi-modal therapeutic paradigm for colloidal bio-detoxification whereby a micro-emulsion both improves cardiac output and rapidly ferries the drug away from organs subject to toxicity. In vivo and in silico models of toxicity were combined to test the contribution of individual mechanisms and reveal the multi-modal role played by the cardiotonic and scavenging actions of the triglyceride suspension. These results suggest a method to predict which drug toxicities are most amenable to treatment and inform the design of next-generation therapeutics for drug overdose.

  10. Multi-modal contributions to detoxification of acute pharmacotoxicity by a triglyceride micro-emulsion

    PubMed Central

    Fettiplace, Michael R; Lis, Kinga; Ripper, Richard; Kowal, Katarzyna; Pichurko, Adrian; Vitello, Dominic; Rubinstein, Israel; Schwartz, David; Akpa, Belinda S; Weinberg, Guy

    2014-01-01

    Triglyceride micro-emulsions such as Intralipid® have been used to reverse cardiac toxicity induced by a number of drugs but reservations about their broad-spectrum applicability remain because of the poorly understood mechanism of action. Herein we report an integrated mechanism of reversal of bupivacaine toxicity that includes both transient drug scavenging and a cardiotonic effect that couple to accelerate movement of the toxin away from sites of toxicity. We thus propose a multi-modal therapeutic paradigm for colloidal bio-detoxification whereby a micro-emulsion both improves cardiac output and rapidly ferries the drug away from organs subject to toxicity. In vivo and in silico models of toxicity were combined to test the contribution of individual mechanisms and reveal the multi-modal role played by the cardiotonic and scavenging actions of the triglyceride suspension. These results suggest a method to predict which drug toxicities are most amenable to treatment and inform the design of next-generation therapeutics for drug overdose. PMID:25483426

  11. Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Zu, Chen; Nie, Feiping; Shen, Dinggang; Wu, Guorong

    2017-01-01

    Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is exclusively learned in the feature domain and may not be necessarily optimal in the label domain. This may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer’s disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data. PMID:28386606

  12. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of feature with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
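
    A much-simplified sketch of the teacher/learner loop is shown below: each feature modality ("teacher") scores the difficulty of every unlabeled image by its distance to the labeled set, the scores are combined into a consensus curriculum, and the easiest images are pseudo-labeled each round. The distance-based difficulty score, the batch size and the kNN learner are stand-in assumptions, not the authors' reliability and discriminability measures.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def curriculum_pseudo_label(modalities, y, labeled, unlabeled, rounds=5, batch=20):
          """modalities: list of (n, d_m) feature arrays; y: label array (filled for labeled)."""
          labeled, unlabeled = list(labeled), list(unlabeled)
          for _ in range(rounds):
              if not unlabeled:
                  break
              # Each teacher ranks unlabeled images by distance to its labeled samples.
              scores = np.zeros(len(unlabeled))
              for X in modalities:
                  d = np.linalg.norm(X[unlabeled][:, None, :] - X[labeled][None, :, :], axis=-1)
                  scores += d.min(axis=1)
              easiest = np.argsort(scores)[:batch]              # consensus curriculum
              picked = [unlabeled[i] for i in easiest]

              # The learner classifies the easiest images using concatenated features.
              Xcat = np.hstack(modalities)
              clf = KNeighborsClassifier(n_neighbors=3).fit(Xcat[labeled], y[labeled])
              y[picked] = clf.predict(Xcat[picked])

              labeled += picked
              unlabeled = [i for i in unlabeled if i not in picked]
          return y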

  13. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively. PMID:25961715

  14. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weaknesses, or even combine the advantages, of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788

  15. ENKI - An Open Source environmental modelling platform

    NASA Astrophysics Data System (ADS)

    Kolberg, S.; Bruland, O.

    2012-04-01

    The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model, and to provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, that time series exist for input variables, that states are initialised, that GIS data sets exist for static map data, and that parameters receive manually or automatically calibrated values. By using function calls and in-memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25,000 grid cells, a simulation speed of 3-4 time steps per second can be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of the API and the user interface.
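
    To illustrate the kind of narrow framework-routine interface described above, where the framework discovers a routine's declared variables and serves them at each time step, here is a deliberately simplified analogue in Python; ENKI itself implements this in C++ with separately compiled DLLs, so the class layout, variable names and the degree-day routine below are illustrative assumptions only.

      class SnowRoutine:
          """A plugin routine declares its variables; the framework supplies them."""
          inputs  = {"precipitation": "map", "temperature": "map"}
          states  = {"snow_storage": "map"}
          params  = {"melt_rate": "scalar"}
          outputs = {"runoff": "map"}

          def step(self, data):
              # Degree-day snowmelt as a stand-in for a real process routine
              # (scalars here stand in for distributed maps).
              melt = min(data["melt_rate"] * max(data["temperature"], 0.0),
                         data["snow_storage"])
              data["snow_storage"] += data["precipitation"] - melt
              data["runoff"] = melt


      def run_model(routines, forcing, n_steps):
          """Framework side: inspect each routine's declared variables, keep a shared
          data store, and call the routines in sequence every time step."""
          data = {}
          for r in routines:                       # initialise states and parameters
              data.update({k: 0.0 for k in r.states})
              data.update({k: 1.0 for k in r.params})
          for t in range(n_steps):
              data.update(forcing[t])              # inject this step's input maps
              for r in routines:
                  r.step(data)
              yield dict(data)                     # snapshot of all variables at step t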

  16. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar (Light Detection and Ranging) is a remote sensing technology that uses a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured. The distance to the object is then easily calculated from the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and records distances to objects several thousand times per second. From this, a three-dimensional structure can be obtained in the form of a point cloud. A point cloud is a collection of three-dimensional points with at least an x, a y and a z attribute, which together represent the position of a single point in three-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target or the time the point was recorded. Another very useful, post-processed attribute is the point classification, which associates a point with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. Handling this data more efficiently has become a necessity: processing, visualizing or even simply loading lidar data can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; point cloud support was added as an afterthought and is therefore inefficient. Newer software optimized for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers and is very costly. Existing open source lidar approaches the loading and processing of lidar in an iterative fashion that requires
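
    The range computation mentioned above is straightforward: the pulse travels to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of light, and combining that range with the scanner's angular encoders yields a 3D point. A minimal sketch follows; the spherical-to-Cartesian convention is a simplifying assumption rather than any particular scanner's geometry.

      import numpy as np

      C = 299_792_458.0  # speed of light, m/s

      def pulses_to_points(round_trip_s, azimuth_rad, elevation_rad):
          """Convert round-trip pulse times and scan angles into an (n, 3) point cloud."""
          rng = 0.5 * C * np.asarray(round_trip_s)          # one-way range in metres
          az, el = np.asarray(azimuth_rad), np.asarray(elevation_rad)
          x = rng * np.cos(el) * np.cos(az)
          y = rng * np.cos(el) * np.sin(az)
          z = rng * np.sin(el)
          return np.column_stack([x, y, z])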

  17. OpenADR Open Source Toolkit: Developing Open Source Software for the Smart Grid

    SciTech Connect

    McParland, Charles

    2011-02-01

    Demand response (DR) is becoming an increasingly important part of power grid planning and operation. The advent of the Smart Grid, which mandates its use, further motivates selection and development of suitable software protocols to enable DR functionality. The OpenADR protocol has been developed and is being standardized to serve this goal. We believe that the development of a distributable, open source implementation of OpenADR will benefit this effort and motivate critical evaluation by the wider community of its capabilities for providing wide-scale DR services.

  18. Interactive, open source, travel time scenario modelling: tools to facilitate participation in health service access analysis.

    PubMed

    Fisher, Rohan; Lassa, Jonatan

    2017-04-18

    Modelling travel time to services has become a common public health tool for planning service provision, but the usefulness of these analyses is constrained by the availability of accurate input data and by limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world, where access to basic data is limited and travel is often complex and multi-modal. Improving accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple, open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more widespread use of GIS analysis of service access and to allow for complex spatial and temporal variations in service availability. These applications are an open source GIS toolkit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context, but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition, this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modelled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and
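
    At the core of such tools is a cumulative travel-time surface: each raster cell carries a crossing time derived from the local travel speed, and a shortest-path search from the service locations gives the travel time to every cell. The sketch below is a generic grid Dijkstra, not the published toolkit; 4-connectivity, the cell size and the speed-to-time conversion are simplifying assumptions.

      import heapq
      import numpy as np

      def travel_time_surface(speed_kmh, sources, cell_size_m=100.0):
          """speed_kmh: 2-D raster of local travel speeds; sources: list of (row, col)
          service locations. Returns minutes of travel time to reach each cell."""
          rows, cols = speed_kmh.shape
          minutes_per_cell = (cell_size_m / 1000.0) / np.maximum(speed_kmh, 1e-6) * 60.0
          time = np.full((rows, cols), np.inf)
          heap = [(0.0, r, c) for r, c in sources]
          for _, r, c in heap:
              time[r, c] = 0.0
          heapq.heapify(heap)

          while heap:
              t, r, c = heapq.heappop(heap)
              if t > time[r, c]:
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected moves
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      # Cost of a move: average crossing time of the two cells.
                      nt = t + 0.5 * (minutes_per_cell[r, c] + minutes_per_cell[nr, nc])
                      if nt < time[nr, nc]:
                          time[nr, nc] = nt
                          heapq.heappush(heap, (nt, nr, nc))
          return time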

  19. Open-Source as a strategy for operational software - the case of Enki

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, seven large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategies, and the use of GIS and modern data sources such as weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigorous testing procedures and mechanisms to reject routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has

  20. Open Source Initiative Powers Real-Time Data Streams

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.

  1. The open-source movement: an introduction for forestry professionals

    Treesearch

    Patrick Proctor; Paul C. Van Deusen; Linda S. Heath; Jeffrey H. Gove

    2005-01-01

    In recent years, the open-source movement has yielded a generous and powerful suite of software and utilities that rivals those developed by many commercial software companies. Open-source programs are available for many scientific needs: operating systems, databases, statistical analysis, Geographic Information System applications, and object-oriented programming....

  2. Open Source for Knowledge and Learning Management: Strategies beyond Tools

    ERIC Educational Resources Information Center

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2007-01-01

    In the last years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies.…

  3. Open Source as Appropriate Technology for Global Education

    ERIC Educational Resources Information Center

    Carmichael, Patrick; Honour, Leslie

    2002-01-01

    Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing…

  4. Getting Open Source Software into Schools: Strategies and Challenges

    ERIC Educational Resources Information Center

    Hepburn, Gary; Buley, Jan

    2006-01-01

    In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…

  5. Can open-source R&D reinvigorate drug research?

    PubMed

    Munos, Bernard

    2006-09-01

    The low number of novel therapeutics approved by the US FDA in recent years continues to cause great concern about productivity and declining innovation. Can open-source drug research and development, using principles pioneered by the highly successful open-source software movement, help revive the industry?

  6. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    ERIC Educational Resources Information Center

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  7. Integrating an Automatic Judge into an Open Source LMS

    ERIC Educational Resources Information Center

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

    This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  8. Getting Open Source Software into Schools: Strategies and Challenges

    ERIC Educational Resources Information Center

    Hepburn, Gary; Buley, Jan

    2006-01-01

    In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…

  9. Open Source as Appropriate Technology for Global Education

    ERIC Educational Resources Information Center

    Carmichael, Patrick; Honour, Leslie

    2002-01-01

    Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing…

  10. Open-Source Unionism: New Workers, New Strategies

    ERIC Educational Resources Information Center

    Schmid, Julie M.

    2004-01-01

    In "Open-Source Unionism: Beyond Exclusive Collective Bargaining," published in fall 2002 in the journal Working USA, labor scholars Richard B. Freeman and Joel Rogers use the term "open-source unionism" to describe a form of unionization that uses Web technology to organize in hard-to-unionize workplaces. Rather than depend on the traditional…

  11. Open Source Course Management Systems: A Case Study

    ERIC Educational Resources Information Center

    Remy, Eric

    2005-01-01

    In Fall 2003, Randolph-Macon Woman's College rolled out Claroline, an Open Source course management system for all the classes on campus. This document will cover some background on both Open Source in general and course management systems in specific, discuss technical challenges in the introduction and integration of the system and give some…

  12. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    ERIC Educational Resources Information Center

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  13. Open-Source Data and the Study of Homicide.

    PubMed

    Parkin, William S; Gruenewald, Jeff

    2015-07-20

    To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to be used as a valid and reliable data source in testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open sources could recreate the population of homicides and variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data, and for every variable measured, the open sources captured as much, or more, of the information presented in the official data. Variables not available in official data, but potentially useful for testing theory, were also identified in open sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events. © The Author(s) 2015.

  14. Open Source Library Management Systems: A Multidimensional Evaluation

    ERIC Educational Resources Information Center

    Balnaves, Edmund

    2008-01-01

    Open source library management systems have improved steadily in the last five years. They now present a credible option for small to medium libraries and library networks. An approach to their evaluation is proposed that takes account of three additional dimensions that only open source can offer: the developer and support community, the source…

  15. Open Source for Knowledge and Learning Management: Strategies beyond Tools

    ERIC Educational Resources Information Center

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2007-01-01

    In the last years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies.…

  16. Migrations of the Mind: The Emergence of Open Source Education

    ERIC Educational Resources Information Center

    Glassman, Michael; Bartholomew, Mitchell; Jones, Travis

    2011-01-01

    The authors describe an Open Source approach to education. They define Open Source Education (OSE) as a teaching and learning framework where the use and presentation of information is non-hierarchical, malleable, and subject to the needs and contributions of students as they become "co-owners" of the course. The course transforms itself into an…

  17. Integrating an Automatic Judge into an Open Source LMS

    ERIC Educational Resources Information Center

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

    This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  18. Automatic quantification of multi-modal rigid registration accuracy using feature detectors

    NASA Astrophysics Data System (ADS)

    Hauler, F.; Furtado, H.; Jurisic, M.; Polanec, S. H.; Spick, C.; Laprie, A.; Nestle, U.; Sabatini, U.; Birkfellner, W.

    2016-07-01

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State of the art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by radiation oncologist) and a quantitative (mean target registration error—mTRE—based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors give comparable results but both overestimated the gold standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable
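
    The registration-quality metric itself is easy to reproduce for 2D slices: detect and match interest points in the two registered images and take the mean Euclidean distance between matched keypoint coordinates. The sketch below uses OpenCV's ORB detector as a freely available stand-in for SURF and Harris; it illustrates the metric, not the authors' 3D evaluation code.

      import cv2
      import numpy as np

      def mean_match_distance(img_a, img_b, max_matches=100):
          """Mean Euclidean distance (pixels) between matched keypoints of two
          already-registered single-channel images; larger values suggest misalignment."""
          orb = cv2.ORB_create(nfeatures=1000)
          kp_a, des_a = orb.detectAndCompute(img_a, None)
          kp_b, des_b = orb.detectAndCompute(img_b, None)
          if des_a is None or des_b is None:
              return float("nan")

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]

          dists = [np.hypot(kp_a[m.queryIdx].pt[0] - kp_b[m.trainIdx].pt[0],
                            kp_a[m.queryIdx].pt[1] - kp_b[m.trainIdx].pt[1])
                   for m in matches]
          return float(np.mean(dists)) if dists else float("nan")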

  19. Automatic quantification of multi-modal rigid registration accuracy using feature detectors.

    PubMed

    Hauler, F; Furtado, H; Jurisic, M; Polanec, S H; Spick, C; Laprie, A; Nestle, U; Sabatini, U; Birkfellner, W

    2016-07-21

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State of the art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by radiation oncologist) and a quantitative (mean target registration error-mTRE-based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors give comparable results but both overestimated the gold standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable

  20. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  1. Multi-modal hard x-ray imaging with a laboratory source using selective reflection from a mirror.

    PubMed

    Pelliccia, Daniele; Paganin, David M

    2014-04-01

    Multi-modal hard x-ray imaging sensitive to absorption, refraction, phase and scattering contrast is demonstrated using a simple setup implemented with a laboratory source. The method is based on selective reflection at the edge of a mirror, aligned to partially reflect a pencil x-ray beam after its interaction with a sample. Quantitative scattering contrast from a test sample is experimentally demonstrated using this method. Multi-modal imaging of a house fly (Musca domestica) is shown as proof of principle of the technique for biological samples.

  2. Multi-modal hard x-ray imaging with a laboratory source using selective reflection from a mirror

    PubMed Central

    Pelliccia, Daniele; Paganin, David M.

    2014-01-01

    Multi-modal hard x-ray imaging sensitive to absorption, refraction, phase and scattering contrast is demonstrated using a simple setup implemented with a laboratory source. The method is based on selective reflection at the edge of a mirror, aligned to partially reflect a pencil x-ray beam after its interaction with a sample. Quantitative scattering contrast from a test sample is experimentally demonstrated using this method. Multi-modal imaging of a house fly (Musca domestica) is shown as proof of principle of the technique for biological samples. PMID:24761297

  3. Incidental Acquisition of Foreign Language Vocabulary through Brief Multi-Modal Exposure

    PubMed Central

    Bisson, Marie-Josée; van Heuven, Walter J. B.; Conklin, Kathy; Tunney, Richard J.

    2013-01-01

    First language acquisition requires relatively little effort compared to foreign language acquisition and happens more naturally through informal learning. Informal exposure can also benefit foreign language learning, although evidence for this has been limited to speech perception and production. An important question is whether informal exposure to spoken foreign language also leads to vocabulary learning through the creation of form-meaning links. Here we tested the impact of exposure to foreign language words presented with pictures in an incidental learning phase on subsequent explicit foreign language learning. In the explicit learning phase, we asked adults to learn translation equivalents of foreign language words, some of which had appeared in the incidental learning phase. Results revealed rapid learning of the foreign language words in the incidental learning phase showing that informal exposure to multi-modal foreign language leads to foreign language vocabulary acquisition. The creation of form-meaning links during the incidental learning phase is discussed. PMID:23579363

  4. Band-edge engineering for controlled multi-modal nanolasing in plasmonic superlattices

    NASA Astrophysics Data System (ADS)

    Wang, Danqing; Yang, Ankun; Wang, Weijia; Hua, Yi; Schaller, Richard D.; Schatz, George C.; Odom, Teri W.

    2017-09-01

    Single band-edge states can trap light and function as high-quality optical feedback for microscale lasers and nanolasers. However, access to more than a single band-edge mode for nanolasing has not been possible because of limited cavity designs. Here, we describe how plasmonic superlattices—finite-arrays of nanoparticles (patches) grouped into microscale arrays—can support multiple band-edge modes capable of multi-modal nanolasing at programmed emission wavelengths and with large mode spacings. Different lasing modes show distinct input-output light behaviour and decay dynamics that can be tailored by nanoparticle size. By modelling the superlattice nanolasers with a four-level gain system and a time-domain approach, we reveal that the accumulation of population inversion at plasmonic hot spots can be spatially modulated by the diffractive coupling order of the patches. Moreover, we show that symmetry-broken superlattices can sustain switchable nanolasing between a single mode and multiple modes.

  5. Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Zhang, Minliang; Chen, Qian

    2016-05-01

    We demonstrate a simple and cost-effective programmable aperture microscope to realize multi-modal computational imaging by integrating a programmable liquid crystal display (LCD) into a conventional wide-field microscope. The LCD selectively modulates the light distribution at the rear aperture of the microscope objective, allowing numerous imaging modalities, such as bright field, dark field, differential phase contrast, quantitative phase imaging, multi-perspective imaging, and full resolution light field imaging, to be achieved and switched rapidly in the same setup, without requiring specialized hardware or any moving parts. We experimentally demonstrate the success of our method by imaging unstained cheek cells, profiling a microlens array, and changing perspective views of thick biological specimens. The post-exposure refocusing of a butterfly mouthpart and an RFP-labeled dicot stem cross-section is also presented to demonstrate the full resolution light field imaging capability of our system for both translucent and fluorescent specimens.
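
    As an example of one of the modalities listed above, differential phase contrast can be obtained from two exposures taken with complementary half-aperture patterns on the LCD; the normalized difference of the two images approximates the phase gradient along the split direction. The sketch below applies that standard DPC normalization and is not code from the paper.

      import numpy as np

      def dpc_image(img_left, img_right, eps=1e-6):
          """Differential phase contrast from two complementary half-aperture exposures."""
          il = np.asarray(img_left, dtype=float)
          ir = np.asarray(img_right, dtype=float)
          return (il - ir) / (il + ir + eps)      # normalized asymmetry ~ phase gradient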

  6. Incidental acquisition of foreign language vocabulary through brief multi-modal exposure.

    PubMed

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2013-01-01

    First language acquisition requires relatively little effort compared to foreign language acquisition and happens more naturally through informal learning. Informal exposure can also benefit foreign language learning, although evidence for this has been limited to speech perception and production. An important question is whether informal exposure to spoken foreign language also leads to vocabulary learning through the creation of form-meaning links. Here we tested the impact of exposure to foreign language words presented with pictures in an incidental learning phase on subsequent explicit foreign language learning. In the explicit learning phase, we asked adults to learn translation equivalents of foreign language words, some of which had appeared in the incidental learning phase. Results revealed rapid learning of the foreign language words in the incidental learning phase showing that informal exposure to multi-modal foreign language leads to foreign language vocabulary acquisition. The creation of form-meaning links during the incidental learning phase is discussed.

  7. Multi-modal vibration energy harvesting approach based on nonlinear oscillator arrays under magnetic levitation

    NASA Astrophysics Data System (ADS)

    Abed, I.; Kacem, N.; Bouhaddi, N.; Bouazizi, M. L.

    2016-02-01

    We propose a multi-modal vibration energy harvesting approach based on arrays of coupled levitated magnets. The equations of motion, which include the magnetic nonlinearity and the electromagnetic damping, are solved using the harmonic balance method coupled with the asymptotic numerical method. A multi-objective optimization procedure is introduced and performed using a non-dominated sorting genetic algorithm for the case of small magnet arrays, in order to select the optimal solutions in terms of performance by bringing the eigenmodes close to each other in frequency and amplitude. Thanks to the nonlinear coupling and the modal interactions, even with only three coupled magnets the proposed method enables harvesting the vibration energy in the operating frequency range of 4.6-14.5 Hz, with a bandwidth of 190% and a normalized power of 20.2 mW cm⁻³ g⁻².
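
    The governing equations are not reproduced in the abstract; a generic form for an array of N coupled levitated magnets, assuming a cubic restoring force from the magnetic levitation, linear electromagnetic damping through the harvesting circuit, and nearest-neighbour coupling, can be written as follows (all symbols are illustrative and not taken from the paper):

      \[
      m\,\ddot{x}_i
      + \Bigl(c + \tfrac{\theta^{2}}{R_{\mathrm{load}} + r_{\mathrm{coil}}}\Bigr)\dot{x}_i
      + k_1 x_i + k_3 x_i^{3}
      + k_c\,(2x_i - x_{i-1} - x_{i+1})
      = -m\,\ddot{y}(t), \qquad i = 1,\dots,N
      \]
      % x_i: displacement of the i-th levitated magnet relative to the base
      % c: mechanical damping; theta: electromechanical coupling coefficient
      % R_load, r_coil: load and coil resistances (electromagnetic damping)
      % k_1, k_3: linear and cubic magnetic stiffness; k_c: inter-magnet coupling
      % y(t): imposed base motion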

  8. A Distance Measure Comparison to Improve Crowding in Multi-Modal Problems.

    SciTech Connect

    D. Todd Vollmer; Terence Soule; Milos Manic

    2010-08-01

    Solving multi-modal optimization problems is of interest to researchers solving real-world problems in areas such as control systems and power engineering tasks. Extensions of simple Genetic Algorithms, particularly types of crowding, have been developed to help solve these types of problems. This paper examines the performance of two distance measures, Mahalanobis and Euclidean, exercised in the processing of two different crowding-type implementations against five minimization functions. Within the context of the experiments, empirical evidence shows that the statistically based Mahalanobis distance measure, when used in Deterministic Crowding, produces results equivalent to the Euclidean measure. In the case of Restricted Tournament Selection, use of Mahalanobis found on average 40% more of the global optima, maintained a 35% higher peak count and produced an average final best fitness value that is 3 times better.
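
    As a concrete illustration of the two measures compared above, the short Python sketch below contrasts Euclidean distance with Mahalanobis distance (which rescales differences by the population covariance) for a pair of candidate solutions; the population and variable names are hypothetical, not the paper's benchmark functions.

      # Illustrative sketch (not the paper's code): Euclidean vs. Mahalanobis distance
      # between candidate solutions, as used when pairing individuals in crowding.
      import numpy as np

      def euclidean(a, b):
          return float(np.linalg.norm(a - b))

      def mahalanobis(a, b, cov_inv):
          d = a - b
          return float(np.sqrt(d @ cov_inv @ d))

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          population = rng.normal(size=(50, 2))            # 50 individuals, 2 decision variables
          cov_inv = np.linalg.inv(np.cov(population.T))    # inverse covariance of the population
          a, b = population[0], population[1]
          print("Euclidean:  ", euclidean(a, b))
          print("Mahalanobis:", mahalanobis(a, b, cov_inv))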

  9. Control of an axisymmetric turbulent jet by multi-modal excitation

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Rice, Edward J.; Reshotko, Eli

    1991-01-01

    Experimental measurements of naturally occurring instability modes in the axisymmetric shear layer of a high Reynolds number turbulent jet are presented. The region up to the end of the potential core was dominated by the axisymmetric mode. The azimuthal modes dominated only downstream of the potential core region. The energy content of the higher order modes (m > 1) was significantly lower than that of the axisymmetric and m = ±1 modes. Under optimum conditions, two-frequency excitation (both at m = 0) was more effective than single-frequency excitation (at m = 0) for jet spreading enhancement. An extended region of the jet was controlled by forcing combinations of both axisymmetric (m = 0) and helical modes (m = ±1). Higher spreading rates were obtained when multi-modal forcing was applied.

  10. Multi-Modality fiducial marker for validation of registration of medical images with histology

    NASA Astrophysics Data System (ADS)

    Shojaii, Rushin; Martel, Anne L.

    2010-03-01

    A multi-modality fiducial marker is presented in this work, which can be used for validating the correlation of histology images with medical images. This marker can also be used for landmark-based image registration. Seven different fiducial markers including a catheter, spaghetti, black spaghetti, cuttlefish ink, and liquid iron are implanted in a mouse specimen and then investigated based on visibility, localization, size, and stability. The black spaghetti and the mixture of cuttlefish ink and flour are shown to be the most suitable markers. Based on the size of the markers, black spaghetti is more suitable for big specimens and the mixture of the cuttlefish ink, flour, and water injected in a catheter is more suitable for small specimens such as mouse tumours. These markers are visible on medical images and also detectable on histology and optical images of the tissue blocks. The main component in these agents which enhances the contrast is iron.

  11. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modality molecular imaging cannot satisfy the growing requirements for efficiency and accuracy in clinical diagnosis and medical research, owing to limitations inherent in any single imaging technique. To compensate for the deficiencies of single-function magnetic resonance imaging contrast agents, the combination of multiple imaging modalities has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their applications in biomedicine.

  12. Multi-modal digital holographic microscopy for wide-field fluorescence and 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Quan, Xiangyu; Xia, Peng; Matoba, Osamu; Nitta, Koichi; Awatsuji, Yasuhiro

    2016-03-01

    Multi-modal digital holographic microscopy combines epifluorescence microscopy and digital holographic microscopy; its main function is to obtain fluorescence intensity images and quantitative phase contrast simultaneously. The proposed system is particularly beneficial to biological studies, which often depend on fluorescent labeling techniques to detect certain intracellular molecules, while phase information reflects the properties of unstained transparent structures. This paper presents our latest research on applications such as randomly moving fluorescent micro-beads and living cells of Physcomitrella patens. The experiments succeeded in obtaining a succession of wide-field fluorescence images and holograms from micro-beads, and focusing at different depths was realized via numerical reconstruction. Living cells of Physcomitrella patens were recorded statically, and the reconstruction distance indicates the thickness of the cellular structure. These results point toward practical applications in many areas of biomedical research.

  13. Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.

    2014-03-01

    Genetically modified T cells expressing chimeric antigen receptors (CAR) exert an anti-tumor effect by identifying tumor-associated antigens (TAA), independent of the major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine whether cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell numbers with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles - Copper-64; SPION-64Cu). This can now be potentially used for 64Cu-based whole-body PET to detect T cell accumulation regions with high sensitivity, followed by SPION-based MRI of these regions for high-resolution, anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target CD19+ lymphoma in vitro.

  14. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in development of advanced Monte Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  15. Automatic trajectory planning of DBS neurosurgery from multi-modal MRI datasets.

    PubMed

    Bériault, Silvain; Al Subaie, Fahd; Mok, Kelvin; Sadikot, Abbas F; Pike, G Bruce

    2011-01-01

    We propose an automated method for preoperative trajectory planning in deep brain stimulation (DBS) image-guided neurosurgery. Our framework integrates multi-modal MRI analysis (T1w, SWI, TOF-MRA) to determine an optimal trajectory to DBS targets (subthalamic nuclei and globus pallidus interna) while avoiding critical brain structures for prevention of hemorrhages, loss of function and other complications. Results show that our method is well suited to aggregate many surgical constraints and allows the analysis of thousands of trajectories in less than one-tenth of the time required for manual planning. Finally, a qualitative evaluation of computed trajectories resulted in the identification of potential new constraints, which are not addressed in the current literature, to better mimic the decision-making of the neurosurgeon during DBS planning.
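
    The aggregation of many surgical constraints over thousands of candidate trajectories can be pictured with the hypothetical sketch below, which penalises trajectories whose minimum distance to each critical structure falls under a safety margin and ranks them by a weighted sum; the weights, margin and structure counts are illustrative and are not the authors' parameters.

      # Hypothetical sketch of the general idea: score candidate electrode trajectories
      # by aggregating weighted surgical-constraint costs and keep the safest ones.
      import numpy as np

      def trajectory_cost(min_dist_to_structures, weights, safety_margin=3.0):
          """Each row holds, for one trajectory, its minimum distance (mm) to each
          critical structure (vessels, sulci, ventricles, ...). Distances below the
          margin are penalised; larger distances cost nothing."""
          penalties = np.clip(safety_margin - min_dist_to_structures, 0.0, None)
          return penalties @ weights

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          dists = rng.uniform(0.5, 10.0, size=(1000, 3))   # 1000 trajectories x 3 structure maps
          weights = np.array([3.0, 2.0, 1.0])              # vessels weighted highest
          costs = trajectory_cost(dists, weights)
          best = np.argsort(costs)[:5]                     # five lowest-risk candidates
          print(best, costs[best])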

  16. Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome

    PubMed Central

    Choudhry, Netan; Rao, Rajesh C.

    2015-01-01

    Background Waardenburg syndrome is characterized by a group of features including telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition and is examined with multi-modal ultra-widefield imaging in this report. Material/Methods Report of a single case. Results Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed normal vasculature; however, a thickened choroid was seen on Enhanced-Depth Imaging Spectral-Domain OCT (EDI SD-OCT). Conclusion(s) Choroidal hypopigmentation is a unique feature of Waardenburg syndrome, which can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition and its thickness measured with EDI SD-OCT. PMID:26114849

  17. Multi-modal miniaturized microscope: successful merger of optical, MEMS, and electronic technologies

    NASA Astrophysics Data System (ADS)

    Tkaczyk, Tomasz S.; Rogers, Jeremy D.; Rahman, Mohammed; Christenson, Todd C.; Gaalema, Stephen; Dereniak, Eustace L.; Richards-Kortum, Rebecca; Descour, Michael R.

    2005-12-01

    The multi-modal miniature microscope (4M) device for early cancer detection is based on a micro-optical table (MOT) platform which accommodates optical, micro-mechanical, and electronic components on a chip. The MOT is a zero-alignment optical-system concept developed for a wide variety of opto-mechanical instruments. In practical terms this concept translates into assembly errors that are smaller than the tolerances on the performance of the optical system. This paper discusses all major system elements: the optical system, a custom high-speed CMOS detector, and a comb drive actuator. It also points to the mutual relations between the different technologies. The hybrid sol-gel lenses, their fabrication and assembly techniques, optical system parameters, and various operation modes are also discussed. A particularly interesting mode is a structured illumination technique that delivers confocal-imaging capabilities and may be used for optical sectioning. Structured illumination is produced with a LIGA-fabricated actuator scanning in resonance and reconstructed using a sine approximation algorithm.

  18. Panel labels extraction from multi-panel figures for facilitating multi-modal information retrieval

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Dong, Le; Liang, Yan; He, Ling; Feng, Ning

    2015-07-01

    The association of subfigures in a multi-panel figure with related text in the accompanying caption and research article is necessary for the implementation of a multi-modal information retrieval system. The panel labels in the multi-panel figure are used as a source for making this kind of association. In this paper, we propose a novel method for the detection of panel labels in multi-panel figures. The proposed method uses segmentation of the multi-panel figure and its accompanying caption into subfigures and sub-captions, respectively, as a preprocessing step. Next, the features of the panel label, i.e., its area and its distance from the borders in the upper leftmost subfigure of the multi-panel figure, are computed. These features are then used for detecting panel labels located in the rest of the subfigures of the same multi-panel figure. Experiments on multi-panel figures selected from the imageCLEF2013 dataset show promising results.
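
    The two features used here, the area of a candidate component and its distance from the subfigure borders, can be computed in a few lines of Python; the sketch below is a hedged illustration using connected-component labelling and is not the authors' implementation.

      # Hedged sketch (not the authors' implementation): compute the two features
      # used above -- candidate-component area and distance from the subfigure
      # borders -- for a binarised upper-left subfigure.
      import numpy as np
      from scipy import ndimage

      def label_candidate_features(binary_subfigure):
          labels, n = ndimage.label(binary_subfigure)
          h, w = binary_subfigure.shape
          feats = []
          for idx in range(1, n + 1):
              ys, xs = np.nonzero(labels == idx)
              area = ys.size
              # distance of the component to the nearest image border
              border_dist = min(ys.min(), xs.min(), h - 1 - ys.max(), w - 1 - xs.max())
              feats.append((idx, int(area), int(border_dist)))
          return feats

      if __name__ == "__main__":
          img = np.zeros((100, 100), dtype=bool)
          img[5:15, 5:12] = True                  # a small blob near the upper-left corner
          print(label_candidate_features(img))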

  19. Control of an axisymmetric turbulent jet by multi-modal excitation

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Rice, Edward J.; Reshotko, Eli

    1991-01-01

    Experimental measurements of naturally occurring instability modes in the axisymmetric shear layer of a high Reynolds number turbulent jet are presented. The region up to the end of the potential core was dominated by the axisymmetric mode. The azimuthal modes dominated only downstream of the potential core region. The energy content of the higher order modes (m > 1) was significantly lower than that of the axisymmetric and m = ±1 modes. Under optimum conditions, two-frequency excitation (both at m = 0) was more effective than single-frequency excitation (at m = 0) for jet spreading enhancement. An extended region of the jet was controlled by forcing combinations of both axisymmetric (m = 0) and helical modes (m = ±1). Higher spreading rates were obtained when multi-modal forcing was applied.

  20. Control of an axisymmetric turbulent jet by multi-modal excitation

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Rice, Edward J.; Reshotko, Eli

    1991-01-01

    Experimental measurements of naturally occurring instability modes in the axisymmetric shear layer of a high Reynolds number turbulent jet are presented. The region up to the end of the potential core was dominated by the axisymmetric mode. The azimuthal modes dominated only downstream of the potential core region. The energy content of the higher order modes (m > 1) was significantly lower than that of the axisymmetric and m = ±1 modes. Under optimum conditions, two-frequency excitation (both at m = 0) was more effective than single-frequency excitation (at m = 0) for jet spreading enhancement. An extended region of the jet was controlled by forcing combinations of both axisymmetric (m = 0) and helical modes (m = ±1). Higher spreading rates were obtained when multi-modal forcing was applied.

  1. A multi-modal approach for activity classification and fall detection

    NASA Astrophysics Data System (ADS)

    Castillo, José Carlos; Carneiro, Davide; Serrano-Cuerda, Juan; Novais, Paulo; Fernández-Caballero, Antonio; Neves, José

    2014-04-01

    Society is changing towards a new paradigm in which an increasing number of older adults live alone. In parallel, the incidence of conditions that affect mobility and independence is also rising as a consequence of longer life expectancy. In this paper, the specific problem of falls among older adults is addressed by devising a technological solution for monitoring these users. Video cameras, accelerometers and GPS sensors are combined in a multi-modal approach to monitor humans inside and outside the domestic environment. Machine learning techniques are used to detect falls and classify activities from accelerometer data. Video feeds and GPS are used to provide location inside and outside the domestic environment. The result is a monitoring solution that does not imply the confinement of the users to a closed environment.
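
    The fall detection in this work relies on machine learning over accelerometer data; the simplified Python sketch below only illustrates the kind of raw signal cue involved, flagging samples whose acceleration magnitude spikes above a threshold, and is not the classifier described above.

      # Simplified sketch, not the paper's classifier: flag candidate falls from
      # tri-axial accelerometer samples when the acceleration magnitude spikes.
      import numpy as np

      def candidate_falls(acc_xyz, threshold_g=2.5, g=9.81):
          """acc_xyz: (N, 3) array of accelerometer samples in m/s^2.
          Returns indices of samples whose magnitude exceeds threshold_g * g."""
          magnitude = np.linalg.norm(acc_xyz, axis=1)
          return np.nonzero(magnitude > threshold_g * g)[0]

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          acc = rng.normal(0.0, 1.0, size=(500, 3)) + np.array([0.0, 0.0, 9.81])
          acc[250] = [5.0, 2.0, 40.0]              # synthetic impact spike
          print(candidate_falls(acc))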

  2. Dynamic Graph Analytic Framework (DYGRAF): greater situation awareness through layered multi-modal network analysis

    NASA Astrophysics Data System (ADS)

    Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.

    2012-06-01

    Understanding the structure and dynamics of networks is of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove the burden from the analyst of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure which facilitates a layered multi-modal network analysis (LMMNA) approach that enables analysts to assemble previously disconnected, yet related, networks in a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.
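
    One way to picture layered multi-modal network analysis is to store every relationship with a modality attribute and extract single-modality layers on demand, as in the hypothetical networkx sketch below; the node names, attributes and helper function are illustrative and are not DYGRAF's API.

      # Hypothetical sketch of layered multi-modal network analysis (not DYGRAF's
      # API): each edge carries a modality label; layers are filtered on demand.
      import networkx as nx

      G = nx.MultiGraph()
      G.add_edge("A", "B", modality="communications", t="2012-06-01")
      G.add_edge("A", "B", modality="financial", t="2012-06-03")
      G.add_edge("B", "C", modality="social", t="2012-06-05")

      def layer(graph, modality):
          """Return the single-modality sub-network (one analysis layer)."""
          H = nx.Graph()
          H.add_edges_from((u, v, d) for u, v, d in graph.edges(data=True)
                           if d["modality"] == modality)
          return H

      comms = layer(G, "communications")
      print(comms.edges(data=True))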

  3. Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)

    SciTech Connect

    Aiken, R.J.; Carlson, R.A.; Foster, I.T.

    1997-01-01

    The research and education (R&E) community requires persistent and scalable network infrastructure to concurrently support production and research applications as well as network research. In the past, the R&E community has relied on supporting parallel network and end-node infrastructures, which can be very expensive and inefficient for network service managers and application programmers. The grand challenge in networking is to provide support for multiple, concurrent, multi-layer views of the network for the applications and the network researchers, and to satisfy the sometimes conflicting requirements of both while ensuring one type of traffic does not adversely affect the other. Internet and telecommunications service providers will also benefit from a multi-modal infrastructure, which can provide smoother transitions to new technologies and allow for testing of these technologies with real user traffic while they are still in the pre-production mode. The authors' proposed approach requires the use of as much of the same network and end system infrastructure as possible to reduce the costs needed to support both classes of activities (i.e., production and research). Breaking the infrastructure into segments and objects (e.g., routers, switches, multiplexors, circuits, paths, etc.) gives the capability to dynamically construct and configure the virtual active networks to address these requirements. These capabilities must be supported at the campus, regional, and wide-area network levels to allow for collaboration by geographically dispersed groups. The Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet) described in this report is an initial architecture and framework designed to identify and support the capabilities needed for the proposed combined infrastructure and to address related research issues.

  4. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION.

    PubMed

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in a high layer to finally generate the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.
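
    The fusion strategy described above, one convolutional stream per modality with features concatenated in a high layer, can be sketched in PyTorch as follows; the sketch uses small 2D streams for brevity and is not the authors' exact architecture or hyper-parameters.

      # Hedged PyTorch sketch of the fusion idea (not the authors' architecture):
      # one small convolutional stream per modality (T1, T2, FA), features
      # concatenated in a high layer, then a 1x1 convolution gives class scores.
      import torch
      import torch.nn as nn

      def stream(in_ch=1, feat=16):
          return nn.Sequential(
              nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
              nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
          )

      class MultiModalityFCN(nn.Module):
          def __init__(self, n_classes=3, feat=16):   # WM, GM, CSF
              super().__init__()
              self.t1, self.t2, self.fa = stream(feat=feat), stream(feat=feat), stream(feat=feat)
              self.head = nn.Conv2d(3 * feat, n_classes, kernel_size=1)

          def forward(self, t1, t2, fa):
              fused = torch.cat([self.t1(t1), self.t2(t2), self.fa(fa)], dim=1)
              return self.head(fused)                  # per-pixel class scores

      if __name__ == "__main__":
          net = MultiModalityFCN()
          x = torch.randn(1, 1, 64, 64)
          print(net(x, x, x).shape)                    # torch.Size([1, 3, 64, 64])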

  5. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    PubMed

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
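
    The evaluation metric quoted above is the Dice similarity coefficient, twice the overlap of two segmentations divided by the sum of their sizes; a minimal Python sketch of the computation on binary masks follows (the toy masks are illustrative).

      # Minimal sketch: Dice similarity coefficient between two binary vessel masks,
      # as used to compare the multi-modal extraction against a reference protocol.
      import numpy as np

      def dice(a, b, eps=1e-8):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

      if __name__ == "__main__":
          ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True
          seg = np.zeros((64, 64), dtype=bool); seg[22:42, 20:40] = True
          print(round(dice(ref, seg), 3))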

  6. Multi-Modal Treatment Approach to Painful Rib Syndrome: Case Series and Review of the Literature.

    PubMed

    Germanovich, Andrew; Ferrante, Francis Michael

    2016-03-01

    Mechanical chest wall pain is a common presenting complaint in the primary care office, emergency room, and specialty clinic. Diagnostic testing is often expensive due to similar presenting symptoms that may involve the heart or lungs. Since the chest wall biomechanics are poorly understood by many clinicians, few effective treatments are offered to patients with rib-related acute pain, which may lead to chronic pain. This case series and literature review illustrates biomechanics involved in the pathogenesis of rib-related chest wall pain and suggests an effective multi-modal treatment plan using interventional techniques with emphasis on manual manipulative techniques. Case series and literature review. Pain clinic in an academic medical center. This is a case series of 3 patients diagnosed with painful rib syndrome using osteopathic palpatory physical examination techniques. Ultrasound-guided intercostal nerve blocks were followed by manual manipulation of mechanically displaced ribs as a part of our multi-modal treatment plan. A review of the literature was undertaken to clarify nomenclature used in the description of rib-related pain, to describe the biomechanics involved in the pathogenesis of mechanical rib pain, and to illustrate the use of effective manual manipulation techniques. This review is introductory and not a complete review of all manual or interventional pain management techniques applicable to the treatment of mechanical rib-related pain. Manual diagnostic and therapeutic skills can be learned by physicians to treat biomechanically complex rib-related chest wall pain in combination with interventional image-guided techniques. Pain physicians should learn certain basic manual manipulation skills both for diagnostic and therapeutic purposes.

  7. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    PubMed

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
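
    The multi-kernel idea, one kernel per modality combined into a single kernel before classification, can be sketched in Python as below; here the kernel weights are fixed by hand and a support vector machine with a precomputed kernel is used, whereas in multi-kernel learning proper the weights would be learned, and the labels and features are synthetic rather than the study's data.

      # Hedged sketch of the multi-kernel idea (not the study's pipeline): one RBF
      # kernel per modality, combined with fixed non-negative weights, fed to an
      # SVM that accepts a precomputed kernel. In MKL proper the weights are learned.
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      n = 60
      y = np.repeat([0, 1], n // 2)                      # labels illustrative only
      modalities = [rng.normal(y[:, None], 1.0, size=(n, 5)) for _ in range(3)]
      weights = np.array([0.5, 0.3, 0.2])                # would be learned in MKL

      K = sum(w * rbf_kernel(X) for w, X in zip(weights, modalities))
      clf = SVC(kernel="precomputed").fit(K, y)
      print("training accuracy:", clf.score(K, y))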

  8. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION

    PubMed Central

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    2016-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in a high layer to finally generate the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement. PMID:27668065

  9. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829

  10. Multi-modal ECG Holter system for sleep-disordered breathing screening: a validation study.

    PubMed

    Poupard, Laurent; Mathieu, Marc; Goldman, Michael; Chouchou, Florian; Roche, Frédéric

    2012-09-01

    The high prevalence of sleep-disordered breathing (SDB) among heart disease patients is becoming increasingly recognized. A reliable screening tool for SDB, well adapted to cardiologists' practice, would be very useful for the management of these patients. We assessed a novel multi-modal electrocardiogram (ECG) Holter which incorporates both thoracic impedance and pulse oximetry signals. In a home setting, a standard condition for Holter recordings, we compared results from the novel device to a classical ambulatory polygraph in subjects with suspected SDB. The analysis of cardiac arrhythmias in relation to SDB is also presented. A total of 118 patients clinically suspected of having SDB were evaluated (mean age 57 ± 14 years, mean body mass index [BMI] 32 ± 6 kg/m(2)). The new device allows calculation of a new index, the thoracic impedance (TI) disturbance index (TIDI+), evaluated from the TI and SpO(2) signals recorded by the Holter monitor. In the population under study, 93% had more than 70% usable TI signal and 95% had more than 90% usable SpO(2) signal during sleep time recording. Screening performance results based on automatic analysis are accurate: TIDI+ demonstrates a high level of sensitivity (96.8%) and specificity (72.3%) as well as positive (82.4%) and negative (94.4%) predictive values for the detection of SDB. Moreover, detection of SDB periods permits us to observe a possible respiratory association for several nocturnal arrhythmias. The multi-modal Holter should be considered a valuable tool for SDB screening and as a case selection technique for facilitating access to full polysomnography for severe cases. Moreover, it offers a unique opportunity to study arrhythmia consequences in relation to both respiratory and hypoxia disturbances.
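
    The screening figures reported above all follow from a standard 2x2 confusion table; the small Python sketch below shows the arithmetic with illustrative counts that are not the study's data.

      # Minimal sketch of the screening arithmetic: sensitivity, specificity and the
      # predictive values follow from a 2x2 confusion table (counts illustrative).
      def screening_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      print(screening_metrics(tp=90, fp=20, fn=10, tn=80))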

  11. NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences.

    PubMed

    Riva, Giuseppe; Gaggioli, Andrea; Villani, Daniela; Preziosa, Alessandra; Morganti, Francesca; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca

    2007-01-01

    In the past decade, the use of virtual reality for clinical and research applications has become more widespread. However, the diffusion of this approach is still limited by three main issues: poor usability, lack of technical expertise among clinical professionals, and high costs. To address these challenges, we introduce NeuroVR (http://www.neurovr.org--http://www.neurotiv.org), a cost-free virtual reality platform based on open-source software, that allows non-expert users to adapt the content of a pre-designed virtual environment to meet the specific needs of the clinical or experimental setting. Using the NeuroVR Editor, the user can choose the appropriate psychological stimuli/stressors from a database of objects (both 2D and 3D) and videos, and easily place them into the virtual environment. The edited scene can then be visualized in the NeuroVR Player using either immersive or non-immersive displays. Currently, the NeuroVR library includes different virtual scenes (apartment, office, square, supermarket, park, classroom, etc.), covering two of the most studied clinical applications of VR: specific phobias and eating disorders. The NeuroVR Editor is based on Blender (http://www.blender.org), the open source, cross-platform suite of tools for 3D creation, and is available as a completely free resource. An interesting feature of the NeuroVR Editor is the possibility to add new objects to the database. This feature allows the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene, i.e., by using photos or movies of objects/people that are part of the patient's daily life, thereby improving the efficacy of the exposure. The NeuroVR platform runs on standard personal computers with Microsoft Windows; the only requirement for the hardware is related to the graphics card, which must support OpenGL.

  12. Open Genetic Code: on open source in the life sciences.

    PubMed

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility in regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  13. Learning by Doing: How to Develop a Cross-Platform Web App

    ERIC Educational Resources Information Center

    Huynh, Minh; Ghimire, Prashant

    2015-01-01

    As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience in a project that involved the development of a student services web app that can be run on cross-platform mobile devices. The paper first describes the background of the project,…

  14. Learning by Doing: How to Develop a Cross-Platform Web App

    ERIC Educational Resources Information Center

    Huynh, Minh; Ghimire, Prashant

    2015-01-01

    As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience in a project that involved the development of a student services web app that can be run on cross-platform mobile devices. The paper first describes the background of the project,…

  15. Making Dynamic Digital Maps Cross-Platform and WWW Capable

    NASA Astrophysics Data System (ADS)

    Condit, C. D.

    2001-05-01

    High-quality color geologic maps are an invaluable information resource for educators, students and researchers. However, maps with large datasets that include images, or various types of movies, in addition to site locations where analytical data has been collected, are difficult to publish in a format that facilitates their easy access, distribution and use. The development of capable desktop computers and object-oriented graphical programming environments has facilitated publication of such data sets in an encapsulated form. The original Dynamic Digital Map (DDM) programs, developed using the Macintosh-based SuperCard programming environment, exemplified this approach, in which all data are included in a single package designed so that display and access to the data did not depend on proprietary programs. These DDMs were designed for ease of use, and allowed data to be displayed by several methods, including point-and-click at icons pinpointing sample (or image) locations on maps, and from clicklists of sample or site numbers. Each of these DDMs included an overview and automated tour explaining the content organization and program use. This SuperCard development culminated in a "DDM Template", which is a SuperCard shell into which SuperCard users could insert their own content and thus create their own DDMs, following instructions in an accompanying "DDM Cookbook" (URL http://www.geo.umass.edu/faculty/condit/condit2.html). These original SuperCard-based DDMs suffered two critical limitations: a single user platform (Macintosh) and, although they could be downloaded from the web, a lack of integration with the WWW. Over the last eight months I have been porting the DDM technology to MetaCard, which is aggressively cross-platform (11 UNIX dialects, WIN32 and Macintosh). The new MetaCard DDM is redesigned to make the maps and images accessible either from CD or the web, using the "LoadNGo" concept. LoadNGo allows the user to download the stand-alone DDM

  16. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of the astronomical software for adaptive optics simulators may complicate the development, since the documentation must contain up-to-date schemes and mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have in-built documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LATEX, mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like syntax, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented as well as guidelines for framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
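
    The comment-translation step can be illustrated with the hedged Python analogue below (the framework itself uses a Perl script): it rewrites leading MATLAB '%' comments as C++-style '///' lines so that Doxygen, pointed at such a filter through its INPUT_FILTER setting, can parse them. The script name and invocation are illustrative.

      # Hedged Python analogue of the comment-translation step (the framework uses
      # Perl): convert leading MATLAB '%' comments into '///' lines for Doxygen,
      # which can call this script through its INPUT_FILTER configuration option.
      import sys

      def filter_mfile(path):
          with open(path, encoding="utf-8") as f:
              for line in f:
                  stripped = line.lstrip()
                  if stripped.startswith("%"):
                      indent = line[: len(line) - len(stripped)]
                      sys.stdout.write(indent + "///" + stripped[1:])
                  else:
                      sys.stdout.write(line)

      if __name__ == "__main__":
          filter_mfile(sys.argv[1])     # usage: python mfilter.py some_function.m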

  17. Interactive multicentre teleconferences using open source software in a team of thoracic surgeons.

    PubMed

    Ito, Kazuhiro; Shimada, Junichi; Katoh, Daishiro; Nishimura, Motohiro; Yanada, Masashi; Okada, Satoru; Ishihara, Shunta; Ichise, Kaori

    2012-12-01

    Real-time consultation between a team of thoracic surgeons is important for the management of difficult cases. We established a system for interactive teleconsultation between multiple sites, based on open-source software. The graphical desktop-sharing system VNC (virtual network computing) was used for remotely controlling another computer. An image-processing package (OsiriX) was installed on the server to share the medical images. We set up a voice communication system using Voice Chatter, a free, cross-platform voice communication application. Four hospitals participated in the trials. One was connected by gigabit ethernet, one by WiMAX and one by ADSL. Surgeons at three of the sites found that it was comfortable to view images and consult with each other using the teleconferencing system. However, it was not comfortable using the client that connected via WiMAX, because of dropped frames. Apart from the WiMAX connection, the VNC-based screen-sharing system transferred the clinical images efficiently and in real time. We found the screen-sharing software VNC to be a good application for medical image interpretation, especially for a team of thoracic surgeons using multislice CT scans.

  18. Real Space Multigrid (RMG) Open Source Software Suite for Multi-Petaflops Electronic Structure Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry; Li, Yan

    RMG is a cross-platform open source package for ab initio electronic structure calculations that uses real-space grids, multigrid pre-conditioning, and subspace diagonalization to solve the Kohn-Sham equations. The code has been successfully used for a wide range of problems ranging from complex bulk materials to multifunctional electronic devices and biological systems. RMG makes efficient use of GPU accelerators, if present, but does not require them. Recent work has extended GPU support to systems with multiple GPUs per computational node, as well as optimized both CPU and GPU memory usage to enable large problem sizes, which are no longer limited by the memory of the GPU board. Additional enhancements include increased portability, scalability and performance. New versions of the code are regularly released at sourceforge.net/projects/rmgdft/. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms.

  19. An open source hydroeconomic model for California's water supply system: PyVIN

    NASA Astrophysics Data System (ADS)

    Dogan, M. S.; White, E.; Herman, J. D.; Hart, Q.; Merz, J.; Medellin-Azuara, J.; Lund, J. R.

    2016-12-01

    Models help operators and decision makers explore and compare different management and policy alternatives, better allocate scarce resources, and predict the future behavior of existing or proposed water systems. Hydroeconomic models are useful tools to increase benefits or decrease costs of managing water. Bringing hydrology and economics together, these models provide a framework for different disciplines that share similar objectives. This work proposes a new model to evaluate operation and adaptation strategies under existing and future hydrologic conditions for California's interconnected water system. This model combines the network structure of CALVIN, a statewide optimization model for California's water infrastructure, with an open source solver written in the Python programming language. With the flexibility of the model, reservoir operations (including water supply and hydropower), groundwater pumping, and Delta water operations and requirements can now be better represented. Given time series of hydrologic inputs to the model, typical outputs include urban, agricultural and wildlife refuge water deliveries and shortage costs, conjunctive use of surface and groundwater systems, and insights into policy and management decisions, such as capacity expansion and groundwater management policies. Water market operations are also represented in the model, allocating water from lower-valued users to higher-valued users. PyVIN serves as a cross-platform, extensible model to evaluate systemwide water operations. PyVIN separates data from the model structure, enabling the model to be easily applied to other parts of the world where water is a scarce resource.
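
    The kind of allocation problem such a hydroeconomic model solves can be pictured with the toy network-flow linear program below, written with SciPy; the node names, capacities and benefit values are hypothetical, and the sketch does not use PyVIN's actual data structures or solver interface.

      # Toy sketch of the kind of minimum-cost network-flow problem a hydroeconomic
      # model solves (numbers and node names are hypothetical, not PyVIN's data):
      # one reservoir supplies an urban and an agricultural demand over two links.
      from scipy.optimize import linprog

      # decision variables: x = [flow_to_urban, flow_to_ag] in volume units
      unit_costs = [-100.0, -40.0]       # negative benefits -> minimisation maximises value
      A_ub = [[1.0, 1.0]]                # total releases limited by reservoir storage
      b_ub = [500.0]
      bounds = [(0.0, 300.0),            # urban delivery capacity / demand cap
                (0.0, 400.0)]            # agricultural delivery cap

      res = linprog(c=unit_costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      print("urban, ag deliveries:", res.x, "objective (negative benefit):", res.fun)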

  20. Sex in the Curriculum: The Effect of a Multi-Modal Sexual History-Taking Module on Medical Student Skills

    ERIC Educational Resources Information Center

    Lindau, Stacy Tessler; Goodrich, Katie G.; Leitsch, Sara A.; Cook, Sandy

    2008-01-01

    Purpose: The objective of this study was to determine the effect of a multi-modal curricular intervention designed to teach sexual history-taking skills to medical students. The Association of Professors of Gynecology and Obstetrics, the National Board of Medical Examiners, and others, have identified sexual history-taking as a learning objective…

  1. Sex in the Curriculum: The Effect of a Multi-Modal Sexual History-Taking Module on Medical Student Skills

    ERIC Educational Resources Information Center

    Lindau, Stacy Tessler; Goodrich, Katie G.; Leitsch, Sara A.; Cook, Sandy

    2008-01-01

    Purpose: The objective of this study was to determine the effect of a multi-modal curricular intervention designed to teach sexual history-taking skills to medical students. The Association of Professors of Gynecology and Obstetrics, the National Board of Medical Examiners, and others, have identified sexual history-taking as a learning objective…

  2. Providing University Education in Physical Geography across the South Pacific Islands: Multi-Modal Course Delivery and Student Grade Performance

    ERIC Educational Resources Information Center

    Terry, James P.; Poole, Brian

    2012-01-01

    Enormous distances across the vast South Pacific hinder student access to the main Fiji campus of the regional tertiary education provider, the University of the South Pacific (USP). Fortunately, USP has been a pioneer in distance education (DE) and promotes multi-modal delivery of programmes. Geography has embraced DE, but doubts remain about…

  3. Multi-Yield Radio Frequency Countermeasures Investigations and Development (MYRIAD) Task Order 006: Integrated Multi-Modal RF Sensing

    DTIC Science & Technology

    2012-08-01

    Mark L. Brockman (Dynetics, Inc.), Steven Kay and Quan Ding (University of Rhode Island), Sean M. O'Rourke and A. Lee Swindlehurst (University of California, Irvine)

  4. Effective Beginning Handwriting Instruction: Multi-Modal, Consistent Format for 2 Years, and Linked to Spelling and Composing

    ERIC Educational Resources Information Center

    Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.

    2017-01-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders,…

  5. Effective Beginning Handwriting Instruction: Multi-Modal, Consistent Format for 2 Years, and Linked to Spelling and Composing

    ERIC Educational Resources Information Center

    Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.

    2017-01-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders,…

  6. HOPC: a Novel Similarity Metric Based on Geometric Structural Properties for Multi-Modal Remote Sensing Image Matching

    NASA Astrophysics Data System (ADS)

    Ye, Yuanxin; Shen, Li

    2016-06-01

    Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images by designing a fast template matching scheme. This novel metric aims to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority to the recent state-of-the-art similarity metrics (e.g., NCC, MI, etc.), and demonstrate its improved matching performance.
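
    Only the final template-matching step is sketched below: slide a template over a search window and keep the offset with the highest zero-mean normalised correlation. The HOPC descriptor itself is not reproduced; in practice the correlation would be computed over dense descriptor maps rather than raw intensities, and the array sizes are illustrative.

      # Hedged sketch of the template-matching step only (not the HOPC descriptor):
      # exhaustively search for the offset with the highest zero-mean normalised
      # correlation between a template and a search window.
      import numpy as np

      def best_offset(search, template):
          th, tw = template.shape
          t = template - template.mean()
          best, score = (0, 0), -np.inf
          for dy in range(search.shape[0] - th + 1):
              for dx in range(search.shape[1] - tw + 1):
                  w = search[dy:dy + th, dx:dx + tw]
                  w = w - w.mean()
                  denom = np.linalg.norm(t) * np.linalg.norm(w)
                  s = (t * w).sum() / denom if denom > 0 else -np.inf
                  if s > score:
                      best, score = (dy, dx), s
          return best, score

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          search = rng.random((60, 60))
          template = search[20:36, 30:46].copy()   # known ground-truth offset (20, 30)
          print(best_offset(search, template))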

  7. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
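
    Plain spatially varying weighted voting, the baseline that the joint label fusion method improves upon, can be sketched in a few lines of Python: each registered atlas votes for its warped label at every voxel, weighted by its local intensity similarity to the target. The sketch below is a hedged illustration with toy data, not the Insight-Toolkit implementation described here.

      # Hedged sketch of plain spatially-varying weighted voting (the baseline
      # described above, not the joint label fusion method itself).
      import numpy as np

      def weighted_voting(target, atlas_imgs, atlas_labels, n_labels, sigma=0.1):
          votes = np.zeros((n_labels,) + target.shape)
          for img, lab in zip(atlas_imgs, atlas_labels):
              w = np.exp(-((img - target) ** 2) / (2 * sigma ** 2))   # voxel-wise weight
              for k in range(n_labels):
                  votes[k] += w * (lab == k)
          return votes.argmax(axis=0)                                 # consensus label map

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          target = rng.random((32, 32))
          atlases = [target + rng.normal(0, 0.05, target.shape) for _ in range(4)]
          labels = [(a > 0.5).astype(int) for a in atlases]           # toy 2-label problem
          print(weighted_voting(target, atlases, labels, n_labels=2).shape)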

  8. Multi-atlas segmentation with joint label fusion and corrective learning-an open source implementation.

    PubMed

    Wang, Hongzhi; Yushkevich, Paul A

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.

  9. Guidelines for the implementation of an open source information system

    SciTech Connect

    Doak, J.; Howell, J.A.

    1995-08-01

    This work was initially performed for the International Atomic Energy Agency (IAEA) to help with the Open Source Task of the 93 + 2 Initiative; however, the information should be of interest to anyone working with open sources. The authors cover all aspects of an open source information system (OSIS) including, for example, identifying relevant sources, understanding copyright issues, and making information available to analysts. They foresee this document as a reference point that implementors of a system could augment for their particular needs. The primary organization of this document focuses on specific aspects, or components, of an OSIS; they describe each component and often make specific recommendations for its implementation. This document also contains a section discussing the process of collecting open source data and a section containing miscellaneous information. The appendix contains a listing of various providers, producers, and databases that the authors have come across in their research.

  10. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communication links to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all of the requirements. Software changes were suggested to meet the requirements.

  11. Learning from hackers: open-source clinical trials.

    PubMed

    Dunn, Adam G; Day, Richard O; Mandl, Kenneth D; Coiera, Enrico

    2012-05-02

    Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. A similar gap was addressed in the software industry by their open-source software movement. Here, we examine how the social and technical principles of the movement can guide the growth of an open-source clinical trial community.

  12. Open Source Software Licenses for Livermore National Laboratory

    SciTech Connect

    Busby, L.

    2000-08-10

    This paper attempts to develop supporting material in an effort to provide new options for licensing Laboratory-created software. Where employees and the Lab wish to release software codes as so-called "Open Source", they need, at a minimum, new licensing language for their released products. Several open source software licenses are reviewed to understand their common elements, and develop recommendations regarding new language.

  13. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform to control optical experimental apparatuses is illustrated. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling customizable designs that are relatively easy to adapt. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104

  14. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform to control optical experimental apparatuses is illustrated. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling customizable designs that are relatively easy to adapt. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  15. Open Source Intelligence "OSINT": Issues for Congress

    DTIC Science & Technology

    2008-01-28

    programs of the Soviet Union and towards the disparate threats posed by emerging post-Cold War threats. Collection strategies shifted from sophisticated...he stated, “Open source intelligence is the outer pieces of the jigsaw puzzle, without which one can neither begin nor complete the puzzle ... open...17 Some open source proponents view such information as constituting more than just the “the outer pieces of the jigsaw puzzle,” but rather every bit

  16. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular for surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational-methods optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
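
    The variational mapping of flow fields described above is not reproduced here, but the underlying idea of comparing motion rather than raw intensity across modalities can be sketched with off-the-shelf tools. The snippet below computes dense optical flow independently on consecutive RGB and IR frames using OpenCV's Farneback method (a stand-in assumption for the authors' optimizer; file names are placeholders and both streams are assumed to be resampled to the same resolution).

```python
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    """Farneback dense optical flow; returns (H, W, 2) per-pixel displacements."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

# Hypothetical frame pairs from a calibrated RGB/IR rig (file names are placeholders).
rgb0 = cv2.cvtColor(cv2.imread("rgb_t0.png"), cv2.COLOR_BGR2GRAY)
rgb1 = cv2.cvtColor(cv2.imread("rgb_t1.png"), cv2.COLOR_BGR2GRAY)
ir0 = cv2.imread("ir_t0.png", cv2.IMREAD_GRAYSCALE)
ir1 = cv2.imread("ir_t1.png", cv2.IMREAD_GRAYSCALE)

flow_rgb = dense_flow(rgb0, rgb1)
flow_ir = dense_flow(ir0, ir1)

# A crude agreement measure between the two flow fields (a simple stand-in for
# the variational alignment described in the abstract): per-pixel endpoint error.
epe = np.linalg.norm(flow_rgb - flow_ir, axis=2)
print("mean flow disagreement (pixels):", float(epe.mean()))
```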

  17. Open source electronic health records and chronic disease management

    PubMed Central

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-01-01

    Objective To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). Methods and Materials The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHC that currently use an open source EHR. Results Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. Discussion The ability to modify the open-source EHR to adapt to the CHC environment and leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management, controlled hypertension and increases in tuberculosis vaccinations were assisted through the use of these open source systems. Conclusions The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary and needed chronic disease care among populations served by CHC. PMID:23813566

  18. Open source electronic health records and chronic disease management.

    PubMed

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-02-01

    To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHC that currently use an open source EHR. Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. The ability to modify the open-source EHR to adapt to the CHC environment and leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management, controlled hypertension and increases in tuberculosis vaccinations were assisted through the use of these open source systems. The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary and needed chronic disease care among populations served by CHC.

  19. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  20. A molecular receptor targeted, hydroxyapatite nanocrystal based multi-modal contrast agent.

    PubMed

    Ashokan, Anusha; Menon, Deepthy; Nair, Shantikumar; Koyakutty, Manzoor

    2010-03-01

    Multi-modal molecular imaging can significantly improve the potential of non-invasive medical diagnosis by combining basic anatomical descriptions with in-depth phenotypic characteristics of disease. Contrast agents with multifunctional properties that can sense and enhance the signature of specific molecular markers, together with high biocompatibility, are essential for combinatorial molecular imaging approaches. Here, we report a multi-modal contrast agent based on hydroxyapatite nanocrystals (nHAp), which is engineered to show simultaneous contrast enhancement for three major molecular imaging techniques: magnetic resonance imaging (MRI), X-ray imaging and near-infrared (NIR) fluorescence imaging. Monodispersed nHAp crystals of average size approximately 30 nm and hexagonal crystal structure were in situ doped with multiple rare-earth impurities by a surfactant-free, aqueous wet-chemical method at 100 degrees C. Doping of nHAp with Eu(3+) (3 at%) resulted in bright near-infrared fluorescence (700 nm) due to the efficient (5)D(0)-(7)F(4) electronic transition, and co-doping with Gd(3+) resulted in enhanced paramagnetic longitudinal relaxivity (r(1) approximately 12 mM(-1) s(-1)) suitable for T(1)-weighted MR imaging, together with approximately 80% X-ray attenuation suitable for X-ray contrast imaging. The capability of MF-nHAp to specifically target and enhance the signature of molecular receptors (folate) in cancer cells was realized by carbodiimide grafting of the cell-membrane receptor ligand folic acid (FA) on the MF-nHAp surface aminized with the dendrigraft polymer polyethyleneimine (PEI). The FA-PEI-MF-nHAp conjugates showed specific aggregation on FR(+ve) cells while leaving the negative control cells untouched. Nanotoxicity evaluation of this multifunctional nHAp carried out on primary human endothelial cells (HUVEC), normal mouse lung fibroblast cell line (L929), human nasopharyngeal carcinoma (KB) and human lung cancer cell line (A549) revealed no apparent toxicity even

  1. Multi-modal sensor based weight drop spinal cord impact system for large animals.

    PubMed

    Kim, Hyeongbeom; Kim, Jong-Wan; Hyun, Jung-Keun; Park, Ilyong

    2017-08-23

    A conventional weight drop spinal cord (SC) impact system for large animals is composed of a high-speed video camera, a vision system, and other components. However, a camera with a high speed of over 5,000 frames per second (FPS) is very expensive. In addition, the utilization of the vision system involves complex pattern recognition algorithms and accurate arrangement of the camera and the target. The purpose of this study was to develop a large animal spinal cord injury modeling system using a multi-modal sensor instead of a high-speed video camera and vision system. Another objective of this study was to demonstrate that the developed system can measure the impact parameters in experiments using materials of different stiffness and an in-vivo porcine SC. A multi-modal sensor based spinal cord injury impact system was developed for large animals. Experiments to measure SC impact parameters were then performed using three materials of different stiffness and a Yucatan miniature pig to verify the performance of the developed system. A comparative experiment was performed using three materials of different stiffness, namely high density (HD) sponge, rubber, and clay, to demonstrate the system and measure impact parameters such as impact velocity, impulsive force, and maximally compressed displacement, which reflect the physical properties of the materials. In the animal experiment, a female Yucatan miniature pig of 60 kg weight was used. Impact conditions for all experiments were fixed at a free-falling object mass of 50 g and a height of 20 cm. In the impact test, the measured impact velocities were almost the same for the three different stiffness materials at 1.84 ± 0.0153 m/s. Impulsive forces for the three materials of rubber, HD sponge, and clay were 50.88 N, 32.35 N, and 6.68 N, respectively. Maximally compressed displacements for rubber, HD sponge, and clay were 1.93 mm, 3.35 mm, and 15.01 mm, respectively. In the pig experiment, impact velocity, impulsive
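
    As a back-of-the-envelope check on the reported numbers (not part of the authors' method), the ideal frictionless free-fall velocity from the stated 20 cm drop is sqrt(2gh) ≈ 1.98 m/s, slightly above the measured 1.84 m/s, which is consistent with small losses in the drop guide:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
h = 0.20        # drop height, m (from the abstract)
m = 0.050       # falling mass, kg (from the abstract)

v_ideal = math.sqrt(2 * g * h)    # frictionless free-fall impact velocity
v_measured = 1.84                 # reported measurement, m/s
print(f"ideal impact velocity: {v_ideal:.2f} m/s, measured: {v_measured:.2f} m/s")

# Kinetic energy actually delivered at impact, given the measured velocity.
e_impact = 0.5 * m * v_measured ** 2
print(f"impact energy: {e_impact * 1000:.1f} mJ")
```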

  2. Multi-modality registration via multi-scale textural and spectral embedding representations

    NASA Astrophysics Data System (ADS)

    Li, Lin; Rusu, Mirabela; Viswanath, Satish; Penzias, Gregory; Pahwa, Shivani; Gollamudi, Jay; Madabhushi, Anant

    2016-03-01

    Intensity-based similarity measures assume that the original signal intensity of different modality images can provide statistically consistent information regarding the two modalities to be co-registered. In multi-modal registration problems, however, intensity-based similarity measures are often inadequate to identify an optimal transformation. Texture features can improve the performance of multi-modal co-registration by providing more similar appearance representations of the two images to be co-registered, compared to the signal intensity representations. Furthermore, texture features extracted at different length scales (neighborhood sizes) can reveal similar underlying structural attributes between the images to be co-registered, similarities that may not be discernible on the signal intensity representation alone. However, one limitation of using texture features is that a number of them may be redundant and dependent, and hence there is a need to identify non-redundant representations. Additionally, it is not clear which features at which specific scales reveal similar attributes across the images to be co-registered. To address this problem, we introduce a novel approach for multi-modal co-registration that employs new multi-scale image representations. Our approach comprises 4 distinct steps: (1) texture feature extraction at each length scale within both the target and template images, (2) independent component analysis (ICA) at each texture feature length scale, (3) spectral embedding (SE) of the ICA components (ICs) obtained for the texture features at each length scale, and finally (4) identifying and combining the optimal length scales at which to perform the co-registration. To combine and co-register across different length scales, mutual information (MI) was applied in the high-dimensional space of spectral embedding vectors to facilitate co-registration. To validate our multi-scale co-registration approach, we aligned 45 pairs of prostate
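
    Steps (2) and (3) of the pipeline can be sketched compactly with scikit-learn (the library choice, synthetic feature matrix and parameters are assumptions for illustration, not the authors' implementation): texture features computed at one length scale are reduced to independent components and then spectrally embedded, producing the low-dimensional representation in which a similarity measure such as mutual information would drive the registration.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

# Hypothetical per-pixel texture feature matrix for one length scale:
# rows are pixels, columns are texture responses (e.g. Gabor/Haralick features).
rng = np.random.default_rng(0)
features = rng.standard_normal((5000, 24))

# Step 2: independent component analysis to remove redundant/dependent features.
ics = FastICA(n_components=8, random_state=0).fit_transform(features)

# Step 3: spectral embedding of the independent components; the resulting
# low-dimensional coordinates form the representation in which the registration
# similarity measure (e.g. mutual information) would be evaluated.
embedding = SpectralEmbedding(n_components=3, n_neighbors=10).fit_transform(ics)
print(embedding.shape)  # (5000, 3)
```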

  3. A multi-modal treatment approach for the shoulder: A 4 patient case series

    PubMed Central

    Pribicevic, Mario; Pollard, Henry

    2005-01-01

    Background This paper describes the clinical management of four cases of shoulder impingement syndrome using a conservative multimodal treatment approach. Clinical Features Four patients presented to a chiropractic clinic with chronic shoulder pain, tenderness in the shoulder region and a limited range of motion with pain and catching. After physical and orthopaedic examination a clinical diagnosis of shoulder impingement syndrome was reached. The four patients were admitted to a multi-modal treatment protocol including soft tissue therapy (ischaemic pressure and cross-friction massage), 7 minutes of phonophoresis (driving of medication into tissue with ultrasound) with 1% cortisone cream, diversified spinal and peripheral joint manipulation and rotator cuff and shoulder girdle muscle exercises. The outcome measures for the study were subjective/objective visual analogue pain scales (VAS), range of motion (goniometer) and return to normal daily, work and sporting activities. All four subjects at the end of the treatment protocol were symptom free with all outcome measures being normal. At 1 month follow up all patients continued to be symptom free with full range of motion and complete return to normal daily activities. Conclusion This case series demonstrates the potential benefit of a multimodal chiropractic protocol in resolving symptoms associated with a suspected clinical diagnosis of shoulder impingement syndrome. PMID:16168053

  4. Fusion of mass spectrometry and microscopy: a multi-modality paradigm for molecular tissue mapping

    PubMed Central

    Van de Plas, Raf; Yang, Junhai; Spraggins, Jeffrey; Caprioli, Richard M.

    2015-01-01

    A new predictive imaging modality is created through the ‘fusion’ of two distinct technologies: imaging mass spectrometry (IMS) and microscopy. IMS-generated molecular maps, rich in chemical information but having coarse spatial resolution, are combined with optical microscopy maps, which have relatively low chemical specificity but high spatial information. The resulting images combine the advantages of both technologies, enabling prediction of a molecular distribution both at high spatial resolution and with high chemical specificity. Multivariate regression is used to model variables in one technology, using variables from the other technology. Several applications demonstrate the remarkable potential of image fusion: (i) ‘sharpening’ of IMS images, which uses microscopy measurements to predict ion distributions at a spatial resolution that exceeds that of measured ion images by ten times or more; (ii) prediction of ion distributions in tissue areas that were not measured by IMS; and (iii) enrichment of biological signals and attenuation of instrumental artifacts, revealing insights that are not easily extracted from either microscopy or IMS separately. Image fusion enables a new multi-modality paradigm for tissue exploration whereby mining relationships between different imaging sensors yields novel imaging modalities that combine and surpass what can be gleaned from the individual technologies alone. PMID:25707028
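
    A minimal sketch of the cross-modality regression idea, assuming scikit-learn and synthetic data (an illustration of the principle, not the authors' fusion pipeline): microscopy features aggregated within each IMS pixel are used to fit a model that predicts an ion intensity, and the fitted model is then applied at full microscopy resolution to produce a "sharpened" ion map.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical data: microscopy features averaged within each IMS pixel
# (n_ims_pixels x n_features) and the corresponding measured ion intensity.
X_coarse = rng.standard_normal((2000, 6))
y_ion = X_coarse @ np.array([0.8, -0.3, 0.1, 0.0, 0.5, 0.2]) + 0.05 * rng.standard_normal(2000)

model = LinearRegression().fit(X_coarse, y_ion)

# Predict at full microscopy resolution (many more pixels than IMS measured),
# yielding a "sharpened" estimate of the ion distribution.
X_fine = rng.standard_normal((200000, 6))
ion_highres = model.predict(X_fine)
print(ion_highres.shape)
```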

  5. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  6. The integration of quantitative multi-modality imaging data into mathematical models of tumors

    NASA Astrophysics Data System (ADS)

    Atuegwu, Nkiruka C.; Gore, John C.; Yankeelov, Thomas E.

    2010-05-01

    Quantitative imaging data obtained from multiple modalities may be integrated into mathematical models of tumor growth and treatment response to achieve additional insights of practical predictive value. We show how this approach can describe the development of tumors that appear realistic in terms of producing proliferating tumor rims and necrotic cores. Two established models (the logistic model with and without the effects of treatment) and one novel model built a priori from available imaging data have been studied. We modify the logistic model to predict the spatial expansion of a tumor driven by tumor cell migration after a voxel's carrying capacity has been reached. Depending on the efficacy of a simulated cytotoxic treatment, we show that the tumor may either continue to expand, or contract. The novel model includes hypoxia as a driver of tumor cell movement. The starting conditions for these models are based on imaging data related to the tumor cell number (as estimated from diffusion-weighted MRI), apoptosis (from 99mTc-Annexin-V SPECT), cell proliferation and hypoxia (from PET). We conclude that integrating multi-modality imaging data into mathematical models of tumor growth is a promising combination that can capture the salient features of tumor growth and treatment response, and this indicates the direction for additional research.
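
    A small numerical sketch of the logistic growth model with a treatment term is given below (the specific cytotoxic term, parameter values and start time are assumed for illustration; the abstract does not specify the authors' exact formulation): dN/dt = k N (1 - N/theta) - alpha N once treatment begins.

```python
import numpy as np
from scipy.integrate import odeint

def logistic_with_treatment(N, t, k, theta, alpha, t_rx):
    """Logistic tumor-cell growth; an assumed constant kill rate applies after t_rx.

    k     : proliferation rate (1/day)
    theta : carrying capacity of the voxel (cells)
    alpha : assumed cytotoxic kill rate during treatment (1/day)
    """
    kill = alpha * N if t >= t_rx else 0.0
    return k * N * (1.0 - N / theta) - kill

t = np.linspace(0, 60, 601)                       # days
N = odeint(logistic_with_treatment, 1e6, t,
           args=(0.2, 1e8, 0.35, 30.0)).ravel()   # treatment starts at day 30
print(f"cells at day 30: {N[300]:.3e}, at day 60: {N[-1]:.3e}")
```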

  7. Holographic Raman tweezers controlled by multi-modal natural user interface

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltán; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Valušová, Eva; Antalík, Marián; Zemánek, Pavel

    2016-01-01

    Holographic optical tweezers provide a contactless way to trap and manipulate several microobjects independently in space using focused laser beams. Although the methods of fast and efficient generation of optical traps are well developed, their user friendly control still lags behind. Even though several attempts have appeared recently to exploit touch tablets, 2D cameras, or Kinect game consoles, they have not yet reached the level of natural human interface. Here we demonstrate a multi-modal ‘natural user interface’ approach that combines finger and gaze tracking with gesture and speech recognition. This allows us to select objects with an operator’s gaze and voice, to trap the objects and control their positions via tracking of finger movement in space and to run semi-automatic procedures such as acquisition of Raman spectra from preselected objects. This approach takes advantage of the power of human processing of images together with smooth control of human fingertips and downscales these skills to control remotely the motion of microobjects at microscale in a natural way for the human operator.

  8. Multi-modal examination of psychological and interpersonal distinctions among MPI coping clusters: A preliminary study

    PubMed Central

    Junghaenel, Doerte U.; Keefe, Francis J.; Broderick, Joan E.

    2009-01-01

    The Multidimensional Pain Inventory (MPI) is a widely used instrument to characterize distinct psychosocial subgroups of patients with chronic pain: Adaptive (AC), Dysfunctional (DYS), and Interpersonally Distressed (ID). To date, several questions remain about the validity and distinctiveness of the patient clusters and continued scientific attention has strongly been recommended. It is unclear if AC patients experience better adjustment or merely present themselves favorably. Moreover, differences in psychological distress and interpersonal relations between DYS and ID patients are equivocal. The present study is the first to utilize comprehensive informant ratings to extend prior validity research on the MPI. We employed a multi-modal methodology consisting of patient self-report, parallel informant ratings, and behavioral measures. Ninety-nine patients with chronic pain, their partners, and providers participated. They completed measures of patients’ psychological distress and social relations. We also systematically observed patients’ pain behavior. Results provided strong support for the validity of the AC cluster in that patients’ positive adaptation was reliably corroborated by informants. The differentiating characteristics between the two maladaptive clusters, however, remain elusive. We found evidence that DYS patients’ distress appeared to be illness-specific rather than generalized; however, both clusters were equally associated with social distress and partner caregiver burden. PMID:19783221

  9. Nano-sensitizers for multi-modality optical diagnostic imaging and therapy of cancer

    NASA Astrophysics Data System (ADS)

    Olivo, Malini; Lucky, Sasidharan S.; Bhuvaneswari, Ramaswamy; Dendukuri, Nagamani

    2011-07-01

    We report novel bioconjugated nanosensitizers as optical and therapeutic probes for the detection, monitoring and treatment of cancer. These nanosensitisers, consisting of hypericin loaded bioconjugated gold nanoparticles, can act as tumor cell specific therapeutic photosensitizers for photodynamic therapy coupled with additional photothermal effects rendered by plasmonic heating effects of gold nanoparticles. In addition to the therapeutic effects, the nanosensitizer can be developed as optical probes for state-of-the-art multi-modality in-vivo optical imaging technology such as in-vivo 3D confocal fluorescence endomicroscopic imaging, optical coherence tomography (OCT) with improved optical contrast using nano-gold and Surface Enhanced Raman Scattering (SERS) based imaging and bio-sensing. These techniques can be used in tandem or independently as in-vivo optical biopsy techniques to specifically detect and monitor specific cancer cells in-vivo. Such novel nanosensitizer based optical biopsy imaging technique has the potential to provide an alternative to tissue biopsy and will enable clinicians to make real-time diagnosis, determine surgical margins during operative procedures and perform targeted treatment of cancers.

  10. MINC 2.0: A Flexible Format for Multi-Modal Images

    PubMed Central

    Vincent, Robert D.; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L.; Fonov, Vladimir S.; Robbins, Steven M.; Baghdadi, Leila; Lerch, Jason; Sled, John G.; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P.; Collins, D. Louis; Evans, Alan C.

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities. PMID:27563289

  11. Stability-Weighted Matrix Completion of Incomplete Multi-modal Data for Disease Diagnosis

    PubMed Central

    Thung, Kim-Han; Adeli, Ehsan; Yap, Pew-Thian

    2016-01-01

    Effective utilization of heterogeneous multi-modal data for Alzheimer’s Disease (AD) diagnosis and prognosis has always been hampered by incomplete data. One method to deal with this is low-rank matrix completion (LRMC), which simultaneously imputes missing data features and target values of interest. Although LRMC yields reasonable results, it implicitly weights features from all the modalities equally, ignoring the differences in discriminative power of features from different modalities. In this paper, we propose stability-weighted LRMC (swLRMC), an LRMC improvement that weights features and modalities according to their importance and reliability. We introduce a method, called stability weighting, to utilize subsampling techniques and outcomes from a range of hyper-parameters of sparse feature learning to obtain a stable set of weights. Incorporating these weights into LRMC, swLRMC can better account for differences in features and modalities for improving diagnosis. Experimental results confirm that the proposed method outperforms the conventional LRMC, feature-selection based LRMC, and other state-of-the-art methods. PMID:28286884
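
    Low-rank matrix completion itself can be sketched in a few lines of numpy via iterative SVD soft-thresholding ("soft-impute"); the per-feature weight vector below is a simplified stand-in for the stability weights described above, not the authors' swLRMC formulation.

```python
import numpy as np

def weighted_soft_impute(X, mask, col_weights, lam=1.0, n_iter=100):
    """Impute missing entries of X (mask == False) with a weighted low-rank fit.

    X           : (n_samples, n_features) data matrix (values at unobserved entries ignored)
    mask        : boolean matrix of observed entries
    col_weights : per-feature weights in (0, 1]; higher = more trusted feature
    lam         : singular-value soft-threshold
    """
    W = np.tile(col_weights, (X.shape[0], 1))
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s_thr = np.maximum(s - lam, 0.0)     # nuclear-norm shrinkage
        L = (U * s_thr) @ Vt                 # current low-rank estimate
        # Keep observed entries (blended by their weight), impute the rest.
        Z = np.where(mask, W * X + (1.0 - W) * L, L)
    return Z
```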

  12. Multi-scale and Multi-modal Analysis of Metamorphic Rocks Coupling Fluorescence and TXM Techniques

    NASA Astrophysics Data System (ADS)

    De Andrade, V. J. D.; Gursoy, D.; Wojcik, M.; DeCarlo, F.; Ganne, J.; Dubacq, B.

    2014-12-01

    Rocks are commonly polycrystalline systems presenting multi-scale chemical and structural heterogeneities inherited from crystallization processes or successive metamorphic events. Through different applications on metamorphic rocks involving fluorescence microprobes and full-field spectroscopy, we illustrate how spatially resolved analytical techniques allow rock compositional variations to be related to large-scale geodynamic processes. These examples also stress the importance of multi-modality instruments with zoom-in capability to study samples from mm to several μm large fields of view, with micrometer down to sub-100 nanometer spatial resolutions. In this perspective, the imaging capabilities offered by the new ultra-bright, diffraction-limited synchrotron sources will be described based on experimental data. Finally, the new hard X-ray Transmission X-ray Microscope (TXM) at Sector 32 of the APS at Argonne National Laboratory, which performs nano computed tomography with in situ capabilities, will be presented. The instrument benefits from several key R&D activities, such as the fabrication of new zone plates in the framework of the Multi-Bend Achromat (MBA) lattice upgrade at the APS, and the development of powerful tomography reconstruction algorithms able to operate with a limited number of projections.

  13. Hybrid parameter identification of a multi-modal underwater soft robot.

    PubMed

    Giorgio-Serchi, F; Arienti, A; Corucci, F; Giorelli, M; Laschi, C

    2017-02-28

    We introduce an octopus-inspired, underwater, soft-bodied robot capable of performing waterborne pulsed-jet propulsion and benthic legged locomotion. Rubber-like materials make up as much as 80% of the vehicle's volume, so that structural flexibility is exploited as a key element during both modes of locomotion. The high bodily softness, the unconventional morphology and the non-stationary nature of its propulsion mechanisms require the dynamic characterization of this robot to be dealt with by ad hoc techniques. We perform parameter identification by resorting to a hybrid optimization approach in which the characterization of the dual ambulatory strategies of the robot is performed in a segregated fashion. A least squares-based method and a genetic algorithm-based method are employed for the swimming and the crawling phases, respectively. The outcomes bring evidence that compartmentalized parameter identification represents a viable protocol for the characterization of multi-modal vehicles. However, the use of static thrust recordings as the input signal in the dynamic determination of shape-changing self-propelled vehicles is responsible for a critical underestimation of the quadratic drag coefficient.
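
    The least-squares half of the hybrid identification can be sketched as follows (the genetic-algorithm stage for crawling is omitted; the 1-D surge model m dv/dt = T(t) - c1 v - c2 v|v|, the synthetic data and the variable names are assumptions for illustration, not the authors' model).

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical recordings from a swimming trial: time (s), thrust (N), velocity (m/s).
t = np.linspace(0, 5, 501)
thrust = 2.0 * np.exp(-((t % 1.0) / 0.1) ** 2)   # pulsed-jet thrust profile
v_meas = 0.3 * (1 - np.exp(-t))                  # measured forward velocity
m_body = 0.8                                     # vehicle mass plus added mass, kg

def residuals(params):
    c1, c2 = params
    # Simulate the 1-D surge dynamics with explicit Euler and compare to the data.
    v = np.zeros_like(t)
    dt = t[1] - t[0]
    for i in range(len(t) - 1):
        dv = (thrust[i] - c1 * v[i] - c2 * v[i] * abs(v[i])) / m_body
        v[i + 1] = v[i] + dt * dv
    return v - v_meas

fit = least_squares(residuals, x0=[0.1, 0.1], bounds=([0, 0], [10, 10]))
print("identified linear/quadratic drag coefficients:", fit.x)
```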

  14. Development and implementation of an integrated, multi-modality, user-centered interactive dietary change program

    PubMed Central

    Glasgow, Russell E.; Christiansen, Steve; Smith, K. Sabina; Stevens, Victor J.; Toobert, Deborah J.

    2009-01-01

    Computer-tailored behavior change programs offer the potential for reaching large populations at a much lower cost than individual or group-based programs. However, few of these programs to date appear to integrate behavioral theory with user choice, or combine different electronic modalities. We describe the development of an integrated CD-ROM and interactive voice response dietary change intervention that combines behavioral problem-solving theory with a high degree of user choice. The program, WISE CHOICES, is being evaluated as part of an ongoing trial. This paper describes the program development, emphasizing how user preferences are accommodated, and presents implementation and user satisfaction data. The program was successfully implemented; the linkages among the central database, the CD-ROM and the automated telephone components were robust, and participants liked the program almost as well as a counselor-delivered dietary change condition. Multi-modality programs that emphasize the strengths of each approach appear to be feasible. Future research is needed to determine the program impact and cost-effectiveness compared with counselor-delivered intervention. PMID:18711204

  15. Interactive Feature Space Explorer© for multi-modal magnetic resonance imaging.

    PubMed

    Özcan, Alpay; Türkbey, Barış; Choyke, Peter L; Akin, Oguz; Aras, Ömer; Mun, Seong K

    2015-07-01

    Wider information content of multi-modal biomedical imaging is advantageous for detection, diagnosis and prognosis of various pathologies. However, the necessity to evaluate a large number of images might hinder these advantages and reduce the efficiency. Herein, a new computer aided approach based on the utilization of feature space (FS) with reduced reliance on multiple image evaluations is proposed for research and routine clinical use. The method introduces the physician experience into the discovery process of FS biomarkers for addressing biological complexity, e.g., disease heterogeneity. This, in turn, elucidates relevant biophysical information which would not be available when automated algorithms are utilized. Accordingly, the prototype platform was designed and built for interactively investigating the features and their corresponding anatomic loci in order to identify pathologic FS regions. While the platform might be potentially beneficial in decision support generally and specifically for evaluating outlier cases, it is also potentially suitable for accurate ground truth determination in FS for algorithm development. Initial assessments conducted on two different pathologies from two different institutions provided valuable biophysical perspective. Investigations of the prostate magnetic resonance imaging data resulted in locating a potential aggressiveness biomarker in prostate cancer. Preliminary findings on renal cell carcinoma imaging data demonstrated potential for characterization of disease subtypes in the FS. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Registration strategies for multi-modal whole-body MRI mosaicing.

    PubMed

    Ceranka, Jakub; Polfliet, Mathias; Lecouvet, Frédéric; Michoux, Nicolas; de Mey, Johan; Vandemeulebroucke, Jef

    2017-06-21

    To test and compare different registration approaches for performing whole-body diffusion-weighted (wbDWI) image station mosaicing and its alignment to the corresponding anatomical T1 whole-body image. Four different registration strategies aiming at mosaicing of diffusion-weighted image stations, and their alignment to the corresponding whole-body anatomical image, were proposed and evaluated. These included two-step approaches, where diffusion-weighted stations are first combined in a pairwise (Strategy 1) or groupwise (Strategy 2) manner and later non-rigidly aligned to the anatomical image; a direct pairwise mapping of DWI stations onto the anatomical image (Strategy 3); and simultaneous mosaicing of DWI and alignment to the anatomical image (Strategy 4). Additionally, different images driving the registration were investigated. Experiments were performed for 20 whole-body images of patients with bone metastases. Strategies 1 and 2 showed significant improvement in mosaicing accuracy with respect to the non-registered images (P < 0.006). Strategy 2 based on ADC images increased the alignment accuracy between DWI stations and the T1 whole-body image (P = 0.0009). A two-step registration strategy, relying on groupwise mosaicing of the ADC stations and subsequent registration to T1, provided the best compromise between whole-body DWI image quality and multi-modal alignment. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
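
    Readers wanting to reproduce a single pairwise step (roughly Strategy 3, mapping one DWI station directly onto the anatomical T1) could start from a minimal SimpleITK mutual-information registration such as the sketch below; the file names, transform choice and optimizer settings are assumptions, not the settings used in the study.

```python
import SimpleITK as sitk

# Hypothetical inputs: one diffusion-weighted station and the whole-body T1 volume.
fixed = sitk.ReadImage("t1_wholebody.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("dwi_station_3.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multi-modal metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(resampled, "dwi_station_3_on_t1.nii.gz")
```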

  17. Multi-modal, ultrasensitive detection of trace explosives using MEMS devices with quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Zandieh, Omid; Kim, Seonghwan

    2016-05-01

    Multi-modal chemical sensors based on microelectromechanical systems (MEMS) have been developed with an electrical readout. Opto-calorimetric infrared (IR) spectroscopy, capable of obtaining molecular signatures of extremely small quantities of adsorbed explosive molecules, has been realized with a microthermometer/microheater device using a widely tunable quantum cascade laser. A microthermometer/microheater device responds to the heat generated by the nonradiative decay process when the adsorbed explosive molecules are resonantly excited with IR light. Monitoring the variation in the microthermometer signal as a function of the illuminating IR wavelength corresponds to the conventional IR absorption spectrum of the adsorbed molecules. Moreover, the mass of the adsorbed molecules is determined by measuring the resonance frequency shift of the cantilever-shaped microthermometer for quantitative opto-calorimetric IR spectroscopy. In addition, micro-differential thermal analysis, which can be used to differentiate exothermic or endothermic reactions of heated molecules, has been performed with the same device to provide an additional orthogonal signal for trace explosive detection and sensor surface regeneration. In summary, we have designed, fabricated and tested microcantilever-shaped devices integrated with a microthermometer/microheater which can provide electrical responses used to acquire both opto-calorimetric IR spectra and microcalorimetric thermal responses. We have demonstrated the successful detection, differentiation, and quantification of trace amounts of explosive molecules and their mixtures (cyclotrimethylene trinitramine (RDX) and pentaerythritol tetranitrate (PETN)) using three orthogonal sensing signals, which improve chemical selectivity.

  18. Determining Pain Detection and Tolerance Thresholds Using an Integrated, Multi-Modal Pain Task Battery

    PubMed Central

    Hay, Justin L.; Okkerse, Pieter; van Amerongen, Guido; Groeneveld, Geert Jan

    2016-01-01

    Human pain models are useful in assessing the analgesic effect of drugs, providing information about a drug's pharmacology and identifying potentially suitable therapeutic populations. The need to use a comprehensive battery of pain models is highlighted by studies in which only a single pain model, thought to relate to the clinical situation, demonstrates a lack of efficacy. No single experimental model can mimic the complex nature of clinical pain. The integrated, multi-modal pain task battery presented here encompasses the electrical stimulation task, pressure stimulation task, cold pressor task, the UVB inflammatory model (which includes a thermal task) and a paradigm for inhibitory conditioned pain modulation. These human pain models have been tested for predictive validity and reliability, both in their own right and in combination, and can be used repeatedly, quickly, in short succession, with minimum burden for the subject and with a modest quantity of equipment. This allows a drug to be fully characterized and profiled for analgesic effect, which is especially useful for drugs with a novel or untested mechanism of action. PMID:27166581

  19. Multi-focus and multi-modal fusion: a study of multi-resolution transforms

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Thomas, Millicent

    2016-05-01

    Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting with the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective for each sub-domain, identifying one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based methods.
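
    To make the transform-selection problem concrete, the sketch below fuses two pre-registered images in the wavelet domain using PyWavelets with a simple max-absolute coefficient rule (the wavelet, decomposition level and fusion rule are illustrative assumptions; the study compares many transforms and evaluates them with no-reference metrics).

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered grayscale images by keeping, per sub-band coefficient,
    the one with the larger absolute value (a common baseline fusion rule)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]  # approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):            # detail bands
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Example with synthetic inputs standing in for two source images of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
fused = wavelet_fuse(scene, scene * 0.5 + 0.25)
print(fused.shape)
```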

  20. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown a remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
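
    Postulate 1 (a spatial transformer placed in front of the feature extractor) can be sketched in PyTorch as below; the layer sizes are arbitrary assumptions, and the remaining components described in the abstract (Fisher encoding, hierarchical multimodal fusion, CRF inference) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STNClassifier(nn.Module):
    """Spatial transformer followed by a small CNN classifier (illustrative only)."""
    def __init__(self, n_classes=10):
        super().__init__()
        # Localization network: predicts a 2x3 affine matrix from the input image.
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6))
        # Initialize the final layer to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Feature extractor / classifier applied to the spatially normalized input.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_warped = F.grid_sample(x, grid, align_corners=False)  # spatially normalized input
        return self.features(x_warped)

logits = STNClassifier()(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```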

  1. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    NASA Astrophysics Data System (ADS)

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-01

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency. This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code; thus the application of PHITS enables dose evaluation not only for BNCT but also for several other radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT case conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multimodal Monte-Carlo radiotherapy planning system are presented.

  2. Imaging results of multi-modal ultrasound computerized tomography system designed for breast diagnosis.

    PubMed

    Opieliński, Krzysztof J; Pruchnicki, Piotr; Gudra, Tadeusz; Podgórski, Przemysław; Kurcz, Jacek; Kraśnicki, Tomasz; Sąsiadek, Marek; Majewski, Jarosław

    2015-12-01

    Nowadays, in the era of widespread computerization, transmission and reflection methods are being intensively developed alongside improvements to classical ultrasound (US) methods for imaging tissue structure, in particular ultrasound transmission tomography (UTT, analogous to X-ray computed tomography, CT) and ultrasound reflection tomography (URT, based on the synthetic aperture method used in radar imaging techniques). This paper presents and analyses the results of ultrasound transmission tomography imaging of the internal structure of the female breast biopsy phantom CIRS Model 052A and the results of ultrasound reflection tomography imaging of a wire sample. Imaging was performed using a multi-modal ultrasound computerized tomography system developed with the participation of a private investor. The results were compared with the results of imaging obtained using dual energy CT, MR mammography and the conventional US method. The obtained results indicate that the developed UTT and URT methods, after acceleration of the scanning process to enable in vivo examination, may be successfully used for detection and detailed characterization of breast lesions in women. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Multi-modal Patient Cohort Identification from EEG Report and Signal Data.

    PubMed

    Goodwin, Travis R; Harabagiu, Sanda M

    2016-01-01

    Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining a MAP score of 69.87% and an NDCG of 83.21%.
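
    For reference, the two retrieval metrics quoted above can be computed as in the generic sketch below (made-up relevance judgments; this is not the MERCuRY evaluation code).

```python
import numpy as np

def average_precision(relevances):
    """AP for one ranked list of binary relevance judgments (1 = relevant)."""
    relevances = np.asarray(relevances, dtype=float)
    if relevances.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(relevances) / (np.arange(len(relevances)) + 1)
    return float((precision_at_k * relevances).sum() / relevances.sum())

def ndcg(relevances):
    """NDCG for one ranked list of graded relevance judgments."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(len(rel)) + 2)
    dcg = float(((2 ** rel - 1) * discounts).sum())
    ideal = float(((2 ** np.sort(rel)[::-1] - 1) * discounts).sum())
    return dcg / ideal if ideal > 0 else 0.0

# Toy ranked results for two cohort queries; MAP is the mean of the per-query APs.
print(np.mean([average_precision([1, 0, 1, 1, 0]),
               average_precision([0, 1, 0, 0, 1])]))
print(ndcg([3, 2, 0, 1, 0]))
```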

  4. Multi-structure segmentation of multi-modal brain images using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Eun Young; Johnson, Hans

    2010-03-01

    A method for simultaneous segmentation of multiple anatomical brain structures from multi-modal MR images has been developed. An artificial neural network (ANN) was trained from a set of feature vectors created by a combination of high-resolution registration methods, atlas based spatial probability distributions, and a training set of 16 expert traced data sets. The set of feature vectors was adapted to increase the performance of the ANN segmentation: 1) a modified spatial location for the structural symmetry of the human brain, 2) neighbors along the priors descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation purposes. Several reliability metrics comparing the ANN-generated segmentations to the manual traces, including relative overlap, similarity index, and intraclass correlation, are similar to or higher than those of previously developed methods. The ANN provides a level of consistency between subjects and a time efficiency, compared with manual labor, that allow it to be used for very large studies.

  5. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI.

    PubMed

    Zhuang, Xiahai; Shen, Juan

    2016-07-01

    A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from the scale-space theory, uses the information of multi-scale images and provides different levels of the structural information of images for multi-level local atlas ranking. Both the local and global atlas ranking steps use the information theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data involving 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, Jaccard index of 0.818 ± 0.0549, and surface distance error of 1.09 ± 1.11 mm for the 20 MRI data. The average runtime for the proposed label fusion was 12.58 min.
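
    The two overlap scores reported above are standard and can be computed as in the generic sketch below (illustrative only, not the authors' evaluation code).

```python
import numpy as np

def dice_and_jaccard(seg, ref, label=1):
    """Dice and Jaccard overlap between a segmentation and a reference label map."""
    a = (np.asarray(seg) == label)
    b = (np.asarray(ref) == label)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

seg = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_and_jaccard(seg, ref))  # approximately (0.667, 0.5)
```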

  6. Multi-modal Patient Cohort Identification from EEG Report and Signal Data

    PubMed Central

    Goodwin, Travis R.; Harabagiu, Sanda M.

    2016-01-01

    Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining MAP scores of 69.87% and a NDCG of 83.21%. PMID:28269938

  7. Multi-Modal Dictionary Learning for Image Separation With Application in Art Investigation

    NASA Astrophysics Data System (ADS)

    Deligiannis, Nikos; Mota, Joao F. C.; Cornelis, Bruno; Rodrigues, Miguel R. D.; Daubechies, Ingrid

    2017-02-01

    In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front and back-side of the panel to drive the separation process. The crux of our approach relies on the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component models features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single- and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to improve further on the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data - taken from digital acquisition of the Ghent Altarpiece (1432) - confirms the superiority of our method against the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.

  8. Determining Pain Detection and Tolerance Thresholds Using an Integrated, Multi-Modal Pain Task Battery.

    PubMed

    Hay, Justin L; Okkerse, Pieter; van Amerongen, Guido; Groeneveld, Geert Jan

    2016-04-14

    Human pain models are useful in assessing the analgesic effect of drugs, providing information about a drug's pharmacology, and identifying potentially suitable therapeutic populations. The need to use a comprehensive battery of pain models is highlighted by studies in which a single pain model, thought to relate to the clinical situation, fails to demonstrate efficacy. No single experimental model can mimic the complex nature of clinical pain. The integrated, multi-modal pain task battery presented here encompasses an electrical stimulation task, a pressure stimulation task, a cold pressor task, a UVB inflammatory model which includes a thermal task, and a paradigm for inhibitory conditioned pain modulation. These human pain models have been tested for predictive validity and reliability both in their own right and in combination, and can be used repeatedly, quickly, and in short succession, with minimum burden for the subject and a modest quantity of equipment. This allows a drug to be fully characterized and profiled for analgesic effect, which is especially useful for drugs with a novel or untested mechanism of action.

  9. Multi-modal target detection for autonomous wide area search and surveillance

    NASA Astrophysics Data System (ADS)

    Breckon, Toby P.; Gaszczak, Anna; Han, Jiwan; Eichner, Marcin L.; Barnes, Stuart E.

    2013-10-01

    Generalised wide-area search and surveillance is a commonplace tasking for multi-sensor-equipped autonomous systems. Here we report on a key supporting topic for this task - the automatic interpretation, fusion and reporting of detected targets from multi-modal sensor information received from multiple autonomous platforms deployed for wide-area environment search. We detail the realization of a real-time methodology for the automated detection of people and vehicles using combined visible-band (EO), thermal-band (IR) and radar sensing from a deployed network of multiple autonomous platforms (ground and aerial). This facilitates real-time target detection, reported with varying levels of confidence, using information from both multiple sensors and multiple sensor platforms to provide environment-wide situational awareness. A range of automatic classification approaches are proposed, driven by underlying machine learning techniques, that facilitate the automatic detection of either target type with cross-modal target confirmation. Extended results are presented that show the detection of both people and vehicles under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance evaluation is presented at an episodic level, with individual classifiers optimized for maximal detection of each object of interest (vehicle/person) over a given search path/pattern of the environment, across all sensors and modalities, rather than on a per-sensor-sample basis. Episodic target detection, evaluated over a number of wide-area environment search and reporting tasks, generally exceeds 90% for the targets considered here.

  10. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments, compress the overall video sequence, and produce a contiguous set of highlights. Experiments are performed on baseball videos, building on signal processing advancements for excitement assessment in the commentators' speech and using audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
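
    One way to make the 'exciting and rare' idea concrete is sketched below under strong simplifications: a Gaussian mixture is fitted to segment-level features and segments are ranked by negative log-likelihood, so unlikely feature combinations surface first; the features and the actual excitability measure of the paper are not reproduced.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      # rows = video segments, columns = placeholder multi-modal features
      # (e.g. audio energy, pitch statistics, scene-cut density, motion activity)
      features = rng.normal(size=(200, 4))

      gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
      rarity = -gmm.score_samples(features)        # low likelihood => rare segment
      ranking = np.argsort(rarity)[::-1]           # candidate highlight order
      print(ranking[:10])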

  11. Anticipation by multi-modal association through an artificial mental imagery process

    NASA Astrophysics Data System (ADS)

    Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno

    2015-01-01

    Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and carried out two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, thus opening interesting pathways to the construction of upper-level grounded cognitive abilities.
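
    A minimal sketch of the off-line Hebbian association step only, assuming toy visual and tactile activation vectors produced by an internal simulation: an outer-product update builds an association matrix that lets a visual pattern recall its expected tactile consequence.

      import numpy as np

      rng = np.random.default_rng(4)
      eta = 0.1                                    # learning rate
      W = np.zeros((8, 12))                        # tactile x visual association weights

      # pairs of co-occurring activations produced by the internal sensorimotor simulation
      for _ in range(500):
          visual = rng.random(12)
          tactile = np.clip(visual[:8] + 0.05 * rng.normal(size=8), 0, 1)  # toy coupling
          W += eta * np.outer(tactile, visual)     # Hebbian update: fire together, wire together

      visual_input = rng.random(12)
      predicted_tactile = W @ visual_input         # anticipated tactile consequence
      print(predicted_tactile)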

  12. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  13. Performance processes within affect-related performance zones: a multi-modal investigation of golf performance.

    PubMed

    van der Lei, Harry; Tenenbaum, Gershon

    2012-12-01

    The individual affect-related performance zones (IAPZs) method, which utilizes Kamata et al.'s (J Sport Exerc Psychol 24:189-208, 2002) probabilistic model for determining the individual zone of optimal functioning, was used to capture idiosyncratic affective patterns during golf performance. To do so, three male golfers of a varsity golf team were observed during three rounds of golf competition. The investigation implemented a multi-modal assessment approach in which the probabilistic relationship between affective states and both performance process and performance outcome measures was determined. More specifically, introspective (i.e., verbal reports) and objective (heart rate and respiration rate) measures of arousal were incorporated to examine the relationships between arousal states and both process components (i.e., routine consistency, timing) and outcome scores related to golf performance. Results revealed distinguishable and idiosyncratic IAPZs associated with physiological and introspective measures for each golfer. The associations between the IAPZs and decision-making or swing/stroke execution were strong and unique for each golfer. Results are elaborated using cognitive and affect-related concepts, and applications for practitioners are provided.

  14. Multi-Modal Neuroimaging Feature Learning for Multi-Class Diagnosis of Alzheimer’s Disease

    PubMed Central

    Liu, Siqi; Liu, Sidong; Cai, Weidong; Che, Hangyu; Pujol, Sonia; Kikinis, Ron; Feng, Dagan; Fulham, Michael J.

    2015-01-01

    The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease-modifying agents become available early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis (CAD) of AD, a bottleneck in diagnostic performance was shown in previous methods, due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with a deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multi-modal neuroimaging features in one setting and has the potential to require less labelled data. A performance gain was achieved in both binary classification and multi-class classification of AD. The advantages and limitations of the proposed framework are discussed. PMID:25423647

  15. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    SciTech Connect

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-25

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency (JAEA). This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS is used for the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code, so the application of PHITS enables dose evaluation not only for BNCT but also for several other radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning for an actual BNCT case conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performance results of the new multi-modal Monte-Carlo radiotherapy planning system are presented.

  16. A Framework for an Open Source Geospatial Certification Model

    NASA Astrophysics Data System (ADS)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to see enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arenas as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea of developing a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and modes of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies-of-knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of the certification. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and evaluated with 105

  17. The Imagery Exchange (TIE): Open Source Imagery Management System

    NASA Astrophysics Data System (ADS)

    Alarcon, C.; Huang, T.; Thompson, C. K.; Roberts, J. T.; Hall, J. R.; Cechini, M.; Schmaltz, J. E.; McGann, J. M.; Boller, R. A.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    NASA's Global Imagery Browse Service (GIBS) is the Earth Observation System (EOS) imagery solution for delivering global, full-resolution satellite imagery in a highly responsive manner. GIBS consists of two major subsystems, OnEarth and The Imagery Exchange (TIE). TIE is the GIBS horizontally scaled imagery workflow manager component, an Open Archival Information System (OAIS) responsible for orchestrating the acquisition, preparation, generation, and archiving of imagery to be served by OnEarth. TIE is an extension of the Data Management and Archive System (DMAS), a high-performance data management system developed at the Jet Propulsion Laboratory by leveraging open source tools and frameworks, which include Groovy/Grails, Restlet, Apache ZooKeeper, Apache Solr, and other open source solutions. This presentation focuses on the application of Open Source technologies in developing horizontally scaled data systems like DMAS and TIE. As part of our commitment to contributing back to the open source community, TIE is in the process of being open sourced. This presentation will also cover our current effort to get TIE into the hands of the community from which we have benefited.

  18. Comparison of open-source linear programming solvers.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
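
    For readers unfamiliar with these tools, the snippet below solves a small illustrative LP (not one of the study's benchmark problems) through SciPy's linprog interface, explicitly requesting the open-source HiGHS solver available in recent SciPy releases.

      from scipy.optimize import linprog

      # minimise c^T x subject to A_ub x <= b_ub, x >= 0 (a toy problem, not one
      # of the benchmark instances used in the study)
      c = [-3, -5]                       # maximise 3x + 5y  ->  minimise -(3x + 5y)
      A_ub = [[1, 0], [0, 2], [3, 2]]
      b_ub = [4, 12, 18]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (0, None)], method="highs")
      print(res.status, res.x, -res.fun)   # expect the classic optimum x=2, y=6, value 36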

  19. Your Personal Analysis Toolkit - An Open Source Solution

    NASA Astrophysics Data System (ADS)

    Mitchell, T.

    2009-12-01

    Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. As geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!

  20. Technology collaboration by means of an open source government

    NASA Astrophysics Data System (ADS)

    Berardi, Steven M.

    2009-05-01

    The idea of open source software originally began in the early 1980s, but it never gained widespread support until recently, largely due to the explosive growth of the Internet. Only the Internet has made this kind of concept possible, bringing together millions of software developers from around the world to pool their knowledge. The tremendous success of open source software has prompted many corporations to adopt the culture of open source and thus share information they previously held secret. The government, and specifically the Department of Defense (DoD), could also benefit from adopting an open source culture. In acquiring satellite systems, the DoD often builds walls between program offices, but installing doors between programs can promote collaboration and information sharing. This paper addresses the challenges and consequences of adopting an open source culture to facilitate technology collaboration for DoD space acquisitions. DISCLAIMER: The views presented here are the views of the author, and do not represent the views of the United States Government, United States Air Force, or the Missile Defense Agency.

  1. Microarray Meta-Analysis and Cross-Platform Normalization: Integrative Genomics for Robust Biomarker Discovery

    PubMed Central

    Walsh, Christopher J.; Hu, Pingzhao; Batt, Jane; Dos Santos, Claudia C.

    2015-01-01

    The diagnostic and prognostic potential of the vast quantity of publicly-available microarray data has driven the development of methods for integrating the data from different microarray platforms. Cross-platform integration, when appropriately implemented, has been shown to improve reproducibility and robustness of gene signature biomarkers. Microarray platform integration can be conceptually divided into approaches that perform early stage integration (cross-platform normalization) versus late stage data integration (meta-analysis). A growing number of statistical methods and associated software for platform integration are available to the user; however, an understanding of their comparative performance and potential pitfalls is critical for best implementation. In this review we provide evidence-based, practical guidance to researchers performing cross-platform integration, particularly with an objective to discover biomarkers. PMID:27600230

  2. Microarray Meta-Analysis and Cross-Platform Normalization: Integrative Genomics for Robust Biomarker Discovery.

    PubMed

    Walsh, Christopher J; Hu, Pingzhao; Batt, Jane; Santos, Claudia C Dos

    2015-08-21

    The diagnostic and prognostic potential of the vast quantity of publicly-available microarray data has driven the development of methods for integrating the data from different microarray platforms. Cross-platform integration, when appropriately implemented, has been shown to improve reproducibility and robustness of gene signature biomarkers. Microarray platform integration can be conceptually divided into approaches that perform early stage integration (cross-platform normalization) versus late stage data integration (meta-analysis). A growing number of statistical methods and associated software for platform integration are available to the user; however, an understanding of their comparative performance and potential pitfalls is critical for best implementation. In this review we provide evidence-based, practical guidance to researchers performing cross-platform integration, particularly with an objective to discover biomarkers.
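
    As an illustration of one early-stage integration step (cross-platform normalization), the sketch below quantile-normalizes expression matrices from two hypothetical platforms onto a shared reference distribution; production pipelines work on matched gene identifiers and typically use dedicated, well-tested packages rather than this minimal NumPy version.

      import numpy as np

      def quantile_normalize(X):
          """Force every column (sample) of X to share the same value distribution."""
          order = np.argsort(X, axis=0)
          ranks = np.argsort(order, axis=0)
          mean_sorted = np.sort(X, axis=0).mean(axis=1)   # reference distribution
          return mean_sorted[ranks]

      rng = np.random.default_rng(5)
      platform_a = rng.normal(5, 1, size=(1000, 6))       # genes x samples, platform A
      platform_b = rng.normal(8, 2, size=(1000, 4))       # same genes, platform B
      combined = np.hstack([platform_a, platform_b])
      normalized = quantile_normalize(combined)
      print(normalized.mean(axis=0))                      # per-sample means now comparable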

  3. A Framework for the Systematic Collection of Open Source Intelligence

    SciTech Connect

    Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  4. Trends and challenges in open source software (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Aylward, Stephen

    2013-10-01

    Over the past decade, the field of medical image analysis research has undergone a rapid evolution. It was a collection of disconnected efforts that were burdened by mundane coding and file I/O tasks. It is now a collaborative community that has embraced open-source software as a shared foundation, reducing mundane coding and I/O burdens, promoting replicable research, and accelerating the pace of research and product development. This talk will review the history and current state of open-source software in medical image analysis research, will discuss the role of intellectual property in research, and will present emerging trends and technologies relevant to the growing importance of open-source software.

  5. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    NASA Astrophysics Data System (ADS)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is the ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using the data extracted from a traffic simulator. A realistic test-case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle big quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potentials of the approach.
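
    A minimal sketch of the basic building block described above, not the paper's full big-data model: a row-stochastic transition matrix is estimated from observed origin-destination records and its stationary distribution is used as a rough indicator of long-run flow per node.

      import numpy as np

      def estimate_transition_matrix(trips, n_nodes):
          """trips: iterable of (origin, destination) node indices observed in the data."""
          counts = np.zeros((n_nodes, n_nodes))
          for o, d in trips:
              counts[o, d] += 1
          counts += 1e-9                              # smooth so unvisited origins do not divide by zero
          return counts / counts.sum(axis=1, keepdims=True)

      def stationary_distribution(P):
          vals, vecs = np.linalg.eig(P.T)
          v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
          return v / v.sum()

      rng = np.random.default_rng(6)
      trips = rng.integers(0, 5, size=(10000, 2))     # toy origin-destination records
      P = estimate_transition_matrix(trips, 5)
      print(stationary_distribution(P))               # long-run share of flow per node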

  6. Human genome and open source: balancing ethics and business.

    PubMed

    Marturano, Antonio

    2011-01-01

    The Human Genome Project has been completed thanks to a massive use of computer techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won over the proprietary model and allowed a quick propagation and feedback of research results among peers. In this paper, the author will analyse some ethical and legal issues emerging by the use of such computer model in the Human Genome property rights. The author will argue that the Open Source is the best business model, as it is able to balance business and human rights perspectives.

  7. Freeing Crop Genetics through the Open Source Seed Initiative

    PubMed Central

    Luby, Claire H.; Goldman, Irwin L.

    2016-01-01

    For millennia, seeds have been freely available to use for farming and plant breeding without restriction. Within the past century, however, intellectual property rights (IPRs) have threatened this tradition. In response, a movement has emerged to counter the trend toward increasing consolidation of control and ownership of plant germplasm. One effort, the Open Source Seed Initiative (OSSI, www.osseeds.org), aims to ensure access to crop genetic resources by embracing an open source mechanism that fosters exchange and innovation among farmers, plant breeders, and seed companies. Plant breeders across many sectors have taken the OSSI Pledge to create a protected commons of plant germplasm for future generations. PMID:27093567

  8. Open source and DIY hardware for DNA nanotechnology labs

    PubMed Central

    Damase, Tulsi R.; Stephens, Daniel; Spencer, Adam; Allen, Peter B.

    2015-01-01

    A set of instruments and specialized equipment is necessary to equip a laboratory to work with DNA. Reducing the barrier to entry for DNA manipulation should enable and encourage new labs to enter the field. We present three examples of open source/DIY technology with significantly reduced costs relative to commercial equipment. This includes a gel scanner, a horizontal PAGE gel mold, and a homogenizer for generating DNA-coated particles. The overall cost savings obtained by using open source/DIY equipment was between 50 and 90%. PMID:26457320

  9. Open-source software for radiologists: a primer.

    PubMed

    Scarsbrook, A F

    2007-02-01

    There is a wide variety of free (open-source) software available via the Internet which may be of interest to radiologists. This article will explore the use of open-source software in radiology to help streamline academic workflow and improve general efficiency and effectiveness by highlighting a number of the most useful applications currently available. These include really simple syndication applications, e-mail management, spreadsheet, word processing, database and presentation packages, as well as image and video editing software. How to incorporate this software into radiological practice will also be discussed.

  10. Open source and DIY hardware for DNA nanotechnology labs.

    PubMed

    Damase, Tulsi R; Stephens, Daniel; Spencer, Adam; Allen, Peter B

    A set of instruments and specialized equipment is necessary to equip a laboratory to work with DNA. Reducing the barrier to entry for DNA manipulation should enable and encourage new labs to enter the field. We present three examples of open source/DIY technology with significantly reduced costs relative to commercial equipment. This includes a gel scanner, a horizontal PAGE gel mold, and a homogenizer for generating DNA-coated particles. The overall cost savings obtained by using open source/DIY equipment was between 50 and 90%.

  11. Comparing uni-modal and multi-modal therapies for improving writing in acquired dysgraphia after stroke.

    PubMed

    Thiel, Lindsey; Sage, Karen; Conroy, Paul

    2016-01-01

    Writing therapy studies have been predominantly uni-modal in nature; i.e., their central therapy task has typically been either writing to dictation or copying and recalling words. There has not yet been a study that has compared the effects of a uni-modal to a multi-modal writing therapy in terms of improvements to spelling accuracy. A multiple-case study with eight participants aimed to compare the effects of a uni-modal and a multi-modal therapy on the spelling accuracy of treated and untreated target words at immediate and follow-up assessment points. A cross-over design was used and within each therapy a matched set of words was targeted. These words and a matched control set were assessed before as well as immediately after each therapy and six weeks following therapy. The two approaches did not differ in their effects on spelling accuracy of treated or untreated items or degree of maintenance. All participants made significant improvements on treated and control items; however, not all improvements were maintained at follow-up. The findings suggested that multi-modal therapy did not have an advantage over uni-modal therapy for the participants in this study. Performance differences were instead driven by participant variables.

  12. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    NASA Astrophysics Data System (ADS)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically ``streamlined'' rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  13. Female preference for multi-modal courtship: multiple signals are important for male mating success in peacock spiders

    PubMed Central

    Girard, Madeline B.; Elias, Damian O.; Kasumovic, Michael M.

    2015-01-01

    A long-standing goal for biologists has been to understand how female preferences operate in systems where males have evolved numerous sexually selected traits. Jumping spiders of the Maratus genus are exceptionally sexually dimorphic in appearance and signalling behaviour. Presumably, strong sexual selection by females has played an important role in the evolution of complex signals displayed by males of this group; however, this has not yet been demonstrated. In fact, despite apparent widespread examples of sexual selection in nature, empirical evidence is relatively sparse, especially for species employing multiple modalities for intersexual communication. In order to elucidate whether female preference can explain the evolution of multi-modal signalling traits, we ran a series of mating trials using Maratus volans. We used video recordings and laser vibrometry to characterize, quantify and examine which male courtship traits predict various metrics of mating success. We found evidence for strong sexual selection on males in this system, with success contingent upon a combination of visual and vibratory displays. Additionally, independently produced, yet correlated suites of multi-modal male signals are linked to other aspects of female peacock spider behaviour. Lastly, our data provide some support for both the redundant signal and multiple messages hypotheses for the evolution of multi-modal signalling. PMID:26631566

  14. Female preference for multi-modal courtship: multiple signals are important for male mating success in peacock spiders.

    PubMed

    Girard, Madeline B; Elias, Damian O; Kasumovic, Michael M

    2015-12-07

    A long-standing goal for biologists has been to understand how female preferences operate in systems where males have evolved numerous sexually selected traits. Jumping spiders of the Maratus genus are exceptionally sexually dimorphic in appearance and signalling behaviour. Presumably, strong sexual selection by females has played an important role in the evolution of complex signals displayed by males of this group; however, this has not yet been demonstrated. In fact, despite apparent widespread examples of sexual selection in nature, empirical evidence is relatively sparse, especially for species employing multiple modalities for intersexual communication. In order to elucidate whether female preference can explain the evolution of multi-modal signalling traits, we ran a series of mating trials using Maratus volans. We used video recordings and laser vibrometry to characterize, quantify and examine which male courtship traits predict various metrics of mating success. We found evidence for strong sexual selection on males in this system, with success contingent upon a combination of visual and vibratory displays. Additionally, independently produced, yet correlated suites of multi-modal male signals are linked to other aspects of female peacock spider behaviour. Lastly, our data provide some support for both the redundant signal and multiple messages hypotheses for the evolution of multi-modal signalling. © 2015 The Author(s).

  15. Imaging Neurodegeneration: Steps Toward Brain Network-Based Pathophysiology and Its Potential for Multi-modal Imaging Diagnostics.

    PubMed

    Sorg, C; Göttler, J; Zimmer, C

    2015-10-01

    Multi-modal brain imaging provides different in vivo windows into the human brain and thereby different ways to characterize brain disorders. Particularly, resting-state functional magnetic resonance imaging facilitates the study of macroscopic intrinsic brain networks, which are critical for development and spread of neurodegenerative processes in different neurodegenerative diseases. The aim of the current study is to present and highlight some paradigmatic findings in intrinsic network-based pathophysiology of neurodegenerative diseases and its potential for new network-based multimodal tools in imaging diagnostics. Qualitative review of selected multi-modal imaging studies in neurodegenerative diseases particularly in Alzheimer's disease (AD). Functional connectivity of intrinsic brain networks is selectively and progressively impaired in AD, with changes likely starting before the onset of symptoms in fronto-parietal key networks such as default mode or attention networks. Patterns of distribution and development of both amyloid-β plaques and atrophy are linked with network connectivity changes, suggesting that start and spread of pathology interacts with network connectivity. Qualitatively similar findings have been observed in other neurodegenerative disorders, suggesting shared mechanisms of network-based pathophysiology across diseases. Spread of neurodegeneration is intimately linked with the functional connectivity of intrinsic brain networks. These pathophysiological insights pave the way for new multi-modal network-based tools to detect and characterize neurodegeneration in individual patients.

  16. Assessment of a multi-modal intervention for the prevention of catheter-associated urinary tract infections.

    PubMed

    Ternavasio-de la Vega, H G; Barbosa Ventura, A; Castaño-Romero, F; Sauchelli, F D; Prolo Acosta, A; Rodríguez Alcázar, F J; Vicente Sánchez, A; Ruiz Antúnez, E; Marcos, M; Laso, J

    2016-10-01

    Catheter-associated urinary tract infections (CAUTIs) represent an important healthcare burden. To assess the effectiveness of an evidence-based multi-modal, multi-disciplinary intervention intended to improve outcomes by reducing the use of urinary catheters (UCs) and minimizing the incidence of CAUTIs in the internal medicine department of a university hospital. A multi-modal intervention was developed, including training sessions, urinary catheterization reminders, surveillance systems, and mechanisms for staff feedback of results. The frequency of UC use and incidence of CAUTIs were recorded in three-month periods before (P1) and during the intervention (P2). The catheterization rate decreased significantly during P2 [27.8% vs 16.9%; relative risk (RR): 0.61; 95% confidence interval (CI): 0.57-0.65]. We also observed a reduction in CAUTI risk (18.3 vs 9.8%; RR: 0.53; 95% CI: 0.30-0.93), a reduction in the CAUTI rate per 1000 patient-days [5.5 vs 2.8; incidence ratio (IR): 0.52; 95% CI: 0.28-0.94], and a non-significant decrease in the CAUTI rate per 1000 catheter-days (19.3 vs 16.9; IR: 0.85; 95% CI: 0.46-1.55). The multi-modal intervention was effective in reducing the catheterization rate and the frequency of CAUTIs. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
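
    For readers who want to reproduce this kind of summary statistic, the sketch below computes a relative risk and its 95% confidence interval on the log scale from event counts; the counts are invented for illustration and are not the study's data.

      import math

      def relative_risk(events_1, n_1, events_2, n_2):
          """RR of period 2 vs period 1 with a 95% CI on the log scale."""
          r1, r2 = events_1 / n_1, events_2 / n_2
          rr = r2 / r1
          se_log = math.sqrt(1/events_1 - 1/n_1 + 1/events_2 - 1/n_2)
          lo = math.exp(math.log(rr) - 1.96 * se_log)
          hi = math.exp(math.log(rr) + 1.96 * se_log)
          return rr, (lo, hi)

      # hypothetical counts: 55 CAUTIs among 300 catheterised patients before,
      # 30 among 305 during the intervention
      print(relative_risk(55, 300, 30, 305))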

  17. Classification of first-episode psychosis: a multi-modal multi-feature approach integrating structural and diffusion imaging.

    PubMed

    Peruzzo, Denis; Castellani, Umberto; Perlini, Cinzia; Bellani, Marcella; Marinelli, Veronica; Rambaldelli, Gianluca; Lasalvia, Antonio; Tosato, Sarah; De Santi, Katia; Murino, Vittorio; Ruggeri, Mirella; Brambilla, Paolo

    2015-06-01

    Currently, most of the classification studies of psychosis focused on chronic patients and employed single machine learning approaches. To overcome these limitations, we here compare, to our best knowledge for the first time, different classification methods of first-episode psychosis (FEP) using multi-modal imaging data exploited on several cortical and subcortical structures and white matter fiber bundles. 23 FEP patients and 23 age-, gender-, and race-matched healthy participants were included in the study. An innovative multivariate approach based on multiple kernel learning (MKL) methods was implemented on structural MRI and diffusion tensor imaging. MKL provides the best classification performances in comparison with the more widely used support vector machine, enabling the definition of a reliable automatic decisional system based on the integration of multi-modal imaging information. Our results show a discrimination accuracy greater than 90 % between healthy subjects and patients with FEP. Regions with an accuracy greater than 70 % on different imaging sources and measures were middle and superior frontal gyrus, parahippocampal gyrus, uncinate fascicles, and cingulum. This study shows that multivariate machine learning approaches integrating multi-modal and multisource imaging data can classify FEP patients with high accuracy. Interestingly, specific grey matter structures and white matter bundles reach high classification reliability when using different imaging modalities and indices, potentially outlining a prefronto-limbic network impaired in FEP with particular regard to the right hemisphere.
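
    The sketch below illustrates the underlying idea of combining per-modality kernels in an SVM rather than the specific MKL algorithm used in the study: kernels computed on placeholder structural and diffusion feature matrices are summed with fixed weights, whereas true MKL learns those weights from the data.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(7)
      n = 46                                          # e.g. 23 patients + 23 controls
      y = np.array([0] * 23 + [1] * 23)
      X_struct = rng.normal(size=(n, 30))             # structural MRI features (placeholder)
      X_dti = rng.normal(size=(n, 20))                # diffusion features (placeholder)

      # fixed-weight combination of per-modality kernels; MKL would learn these weights
      K = 0.6 * rbf_kernel(X_struct) + 0.4 * rbf_kernel(X_dti)

      clf = SVC(kernel="precomputed").fit(K, y)
      print(clf.predict(K[:5]))                       # in-sample sanity check only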

  18. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data.

    PubMed

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter; Kjaer, Troels W; Sams, Thomas; Sorensen, Helge B D

    2012-08-01

    The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features all data segments were classified using a support vector machine (SVM) algorithm as simulated seizure or normal activity. A case study of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and low false detection rate. The results showed superiority of the multi-modal detection system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
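
    A hedged sketch of the generic pipeline named above (wavelet sub-band energies fed to an SVM), with synthetic signals in place of sEMG/motion recordings; the wavelet family, decomposition levels, and post-processing of the published MISA system are assumptions here.

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def segment_features(segment, wavelet="db4", level=4):
          """Energy of each DWT sub-band of a 1-D sensor segment."""
          coeffs = pywt.wavedec(segment, wavelet, level=level)
          return np.array([np.sum(c ** 2) for c in coeffs])

      rng = np.random.default_rng(8)
      normal = [rng.normal(0, 1, 256) for _ in range(50)]
      seizure = [rng.normal(0, 1, 256) * np.linspace(1, 4, 256) for _ in range(50)]  # toy bursts
      X = np.array([segment_features(s) for s in normal + seizure])
      y = np.array([0] * 50 + [1] * 50)

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.score(X, y))                       # training accuracy of the toy detector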

  19. The sweet spot: FDG and other 2-carbon glucose analogs for multi-modal metabolic imaging of tumor metabolism.

    PubMed

    Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W

    2015-01-01

    Multi-modal imaging approaches of tumor metabolism that provide improved specificity, physiological relevance and spatial resolution would improve diagnosing of tumors and evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with (18)F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose, on the cellular level. Multi-modal imaging could elucidate this problem, but requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that have been performed that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies are performed in vitro, while others are performed in vivo, in an animal model. The results from these studies introduce a visualization gap between the in vitro and in vivo studies that, if solved, could enable the early detection of tumors, the high resolution monitoring of tumors during treatment, and the greater accuracy in assessment of different imaging agents.

  20. A connectivity-based test-retest dataset of multi-modal magnetic resonance imaging in young healthy adults.

    PubMed

    Lin, Qixiang; Dai, Zhengjia; Xia, Mingrui; Han, Zaizhu; Huang, Ruiwang; Gong, Gaolang; Liu, Chao; Bi, Yanchao; He, Yong

    2015-01-01

    Recently, magnetic resonance imaging (MRI) has been widely used to investigate the structures and functions of the human brain in health and disease in vivo. However, there are growing concerns about the test-retest reliability of structural and functional measurements derived from MRI data. Here, we present a test-retest dataset of multi-modal MRI including structural MRI (S-MRI), diffusion MRI (D-MRI) and resting-state functional MRI (R-fMRI). Fifty-seven healthy young adults (age range: 19-30 years) were recruited and completed two multi-modal MRI scan sessions at an interval of approximately 6 weeks. Each scan session included R-fMRI, S-MRI and D-MRI data. Additionally, there were two separated R-fMRI scans at the beginning and at the end of the first session (approximately 20 min apart). This multi-modal MRI dataset not only provides excellent opportunities to investigate the short- and long-term test-retest reliability of the brain's structural and functional measurements at the regional, connectional and network levels, but also allows probing the test-retest reliability of structural-functional couplings in the human brain.

  1. Fatal pulmonary embolism following elective total knee replacement using aspirin in multi-modal prophylaxis - A 12-year study.

    PubMed

    Quah, C; Bayley, E; Bhamber, N; Howard, P

    2017-10-01

    The National Institute for Health and Clinical Excellence (NICE) has issued guidelines on which thromboprophylaxis regimens are suitable following lower limb arthroplasty. Aspirin is not a recommended agent despite being accepted in orthopaedic guidelines elsewhere. We assessed the incidence of fatal pulmonary embolism (PE) and all-cause mortality following elective primary total knee replacement (TKR) with a standardised multi-modal prophylaxis regime in a large teaching district general hospital. We utilised a prospective audit database to identify those who had died within 42 and 90 days postoperatively. Data from April 2000 to 2012 were analysed for 42- and 90-day mortality rates. There were a total of 8277 elective primary TKRs performed over the 12-year period. The multi-modal prophylaxis regimen, used unless contraindicated for all patients, included 75 mg aspirin once daily for four weeks. Case note review ascertained the causes of death. Where a patient had been referred to the coroner, the coroner was contacted for post mortem results. The mortality rates at 42 and 90 days were 0.36% and 0.46%. There was one fatal PE within 42 days of surgery (0.01%), in a patient who was taking enoxaparin because of aspirin intolerance. Two fatal PEs occurred at 48 and 57 days post-operatively (0.02%). The leading cause of death was myocardial infarction (0.13%). Fatal PE following elective TKR with a multi-modal prophylaxis regime is a very rare cause of mortality. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. The effectiveness of multi modal representation text books to improve student's scientific literacy of senior high school students

    NASA Astrophysics Data System (ADS)

    Zakiya, Hanifah; Sinaga, Parlindungan; Hamidah, Ida

    2017-05-01

    The results of field studies showed that students' science literacy was still low. One root of the problem lies in the fact that the books used in learning are not oriented toward the components of science literacy. This study focused on the effectiveness of textbooks designed to develop science literacy through multi-modal representation. The textbook development method used the Design Representational Approach Learning to Write (DRALW). The textbook design, applied to the topic of "Kinetic Theory of Gases", was implemented with grade XI high-school students. Effectiveness was determined from the effect size and the normalized gain value, while the hypothesis was tested using an independent t-test. The results showed that the textbooks developed using multi-modal representation can improve students' science literacy skills. Based on the effect size, the textbooks developed with multi-modal representation were found to be effective in improving students' science literacy skills. The improvement occurred across all competences and knowledge of scientific literacy. The hypothesis test showed that there was a significant difference in science literacy between the class that used the textbooks with multi-modal representation and the class that used the regular textbook used in schools.
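
    To make the reported quantities concrete, the sketch below computes Hake-style normalized gains, g = (post - pre) / (100 - pre) for percentage scores, and compares the two classes with an independent-samples t-test; the scores are invented for illustration.

      import numpy as np
      from scipy import stats

      def normalized_gain(pre, post):
          """Hake-style normalized gain for scores expressed as percentages."""
          pre, post = np.asarray(pre, float), np.asarray(post, float)
          return (post - pre) / (100.0 - pre)

      # invented pre/post science-literacy scores for the two classes
      treat_pre, treat_post = [40, 35, 50, 45], [75, 70, 85, 80]
      ctrl_pre, ctrl_post = [42, 38, 48, 44], [55, 52, 60, 58]

      g_treat = normalized_gain(treat_pre, treat_post)
      g_ctrl = normalized_gain(ctrl_pre, ctrl_post)
      print(g_treat.mean(), g_ctrl.mean())
      print(stats.ttest_ind(g_treat, g_ctrl))      # independent-samples t-test on the gains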

  3. A novel technique to incorporate structural prior information into multi-modal tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Ourselin, Sébastien; Hutton, Brian F.; Dobson, Katherine J.; Kaestner, Anders P.; Lionheart, William R. B.; Withers, Philip J.; Lee, Peter D.; Arridge, Simon R.

    2014-06-01

    There has been a rapid expansion of multi-modal imaging techniques in tomography. In biomedical imaging, patients are now regularly imaged using both single photon emission computed tomography (SPECT) and x-ray computed tomography (CT), or using both positron emission tomography and magnetic resonance imaging (MRI). In non-destructive testing of materials both neutron CT (NCT) and x-ray CT are widely applied to investigate the inner structure of material or track the dynamics of physical processes. The potential benefits from combining modalities has led to increased interest in iterative reconstruction algorithms that can utilize the data from more than one imaging mode simultaneously. We present a new regularization term in iterative reconstruction that enables information from one imaging modality to be used as a structural prior to improve resolution of the second modality. The regularization term is based on a modified anisotropic tensor diffusion filter, that has shape-adapted smoothing properties. By considering the underlying orientations of normal and tangential vector fields for two co-registered images, the diffusion flux is rotated and scaled adaptively to image features. The images can have different greyscale values and different spatial resolutions. The proposed approach is particularly good at isolating oriented features in images which are important for medical and materials science applications. By enhancing the edges it enables both easy identification and volume fraction measurements aiding segmentation algorithms used for quantification. The approach is tested on a standard denoising and deblurring image recovery problem, and then applied to 2D and 3D reconstruction problems; thereby highlighting the capabilities of the algorithm. Using synthetic data from SPECT co-registered with MRI, and real NCT data co-registered with x-ray CT, we show how the method can be used across a range of imaging modalities.

  4. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
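
    A simplified sketch of the unsupervised audio branch only: a Gaussian mixture model is fitted to acoustic features of the normal ambience, and test segments are flagged as unusual when their log-likelihood falls below a low percentile of the training scores; the actual system's segmentation, clustering, features, and Dempster-Shafer fusion are not reproduced.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(9)
      # placeholder acoustic feature vectors (e.g. MFCC-like) for the normal ambience
      normal_feats = rng.normal(0, 1, size=(2000, 13))

      gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
      gmm.fit(normal_feats)
      threshold = np.percentile(gmm.score_samples(normal_feats), 1)   # 1st-percentile log-lik

      test_feats = np.vstack([rng.normal(0, 1, size=(5, 13)),
                              rng.normal(4, 1, size=(5, 13))])        # last 5 are "unusual"
      is_unusual = gmm.score_samples(test_feats) < threshold
      print(is_unusual)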

  5. Implementation of a multi-modal mobile sensor system for surface and subsurface assessment of roadways

    NASA Astrophysics Data System (ADS)

    Wang, Ming; Birken, Ralf; Shahini Shamsabadi, Salar

    2015-03-01

    There are more than 4 million miles of roads and 600,000 bridges in the United States alone. On-going investments are required to maintain the physical and operational quality of these assets to ensure the public's safety and the prosperity of the economy. Planning efficient maintenance and repair (M&R) operations must be supported by a meticulous pavement inspection method that is non-disruptive, is affordable and requires minimum manual effort. The Versatile Onboard Traffic Embedded Roaming Sensors (VOTERS) project developed a technology able to cost-effectively monitor the condition of roadway systems to plan for the right repairs, in the right place, at the right time. VOTERS technology consists of an affordable, lightweight package of multi-modal sensor systems including acoustic, optical, electromagnetic, and GPS sensors. Vehicles outfitted with this technology would be capable of collecting information on a variety of pavement-related characteristics at both surface and subsurface levels as they are driven. By correlating the sensors' outputs with the positioning data collected in tight time synchronization, a GIS-based control center attaches a spatial component to all the sensors' measurements and delivers multiple ratings of the pavement every meter. These spatially indexed ratings are then leveraged by VOTERS decision making modules to plan the optimum M&R operations and predict the future budget needs. In 2014, VOTERS inspection results were validated by comparing them to the outputs of recent professionally conducted condition surveys by a local engineering firm for 300 miles of Massachusetts roads. The success of the VOTERS project points toward rapid, intelligent, and comprehensive evaluation of tomorrow's transportation infrastructure to increase the public's safety, vitalize the economy, and deter catastrophic failures.

  6. Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data.

    PubMed

    Yuan, Lei; Wang, Yalin; Thompson, Paul M; Narayan, Vaibhav A; Ye, Jieping

    2012-01-01

    Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results.
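
    The second method (a base classifier per source, completion of the missing prediction scores, then a fusion model) can be sketched on synthetic data. The k-nearest-neighbour imputation and logistic-regression fusion below are illustrative simplifications, not the paper's exact algorithm.

    ```python
    # Hedged sketch of "per-source base classifiers + score-matrix completion +
    # fusion" on synthetic data; the imputation and fusion choices are
    # illustrative simplifications of the described method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.impute import KNNImputer

    rng = np.random.default_rng(0)
    n, sources = 200, 3
    y = rng.integers(0, 2, n)
    X = [y[:, None] * 0.8 + rng.normal(size=(n, 10)) for _ in range(sources)]

    # Randomly drop each source for ~40% of subjects, but keep at least one
    # available source per subject.
    avail = rng.random((n, sources)) > 0.4
    avail[~avail.any(axis=1), 0] = True

    # One column of prediction scores per source; NaN where the source is missing.
    scores = np.full((n, sources), np.nan)
    for s in range(sources):
        idx = np.where(avail[:, s])[0]
        clf = LogisticRegression(max_iter=1000).fit(X[s][idx], y[idx])
        scores[idx, s] = clf.predict_proba(X[s][idx])[:, 1]

    # Estimate the missing scores, then fuse the completed score matrix.
    completed = KNNImputer(n_neighbors=5).fit_transform(scores)
    fusion = LogisticRegression(max_iter=1000).fit(completed, y)
    print("training accuracy of the fused model:", fusion.score(completed, y))
    ```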

  7. Multi-modal anatomical optical coherence tomography and CT for in vivo dynamic upper airway imaging

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Santosh; Bu, Ruofei; Price, Hillel; Zdanski, Carlton; Oldenburg, Amy L.

    2017-02-01

    We describe a novel, multi-modal imaging protocol for validating quantitative dynamic airway imaging performed using anatomical Optical Coherence Tomography (aOCT). The aOCT system consists of a catheter-based aOCT probe that is deployed via a bronchoscope, while a programmable ventilator is used to control airway pressure. This setup is employed on the bed of a Siemens Biograph CT system capable of performing respiratory-gated acquisitions. In this arrangement the position of the aOCT catheter may be visualized with CT to aid in co-registration. Utilizing this setup we investigate multiple respiratory pressure parameters with aOCT, and respiratory-gated CT, on both ex vivo porcine trachea and live, anesthetized pigs. This acquisition protocol has enabled real-time measurement of airway deformation with simultaneous measurement of pressure under physiologically relevant static and dynamic conditions: inspiratory peak or peak positive airway pressures of 10-40 cm H2O, and 20-30 breaths per minute for dynamic studies. We subsequently compare the airway cross-sectional areas (CSA) obtained from aOCT and CT, including the change in CSA at different stages of the breathing cycle for dynamic studies, and the CSA at different peak positive airway pressures for static studies. This approach has allowed us to improve our acquisition methodology and to validate aOCT measurements of the dynamic airway for the first time. We believe that this protocol will prove invaluable for aOCT system development and greatly facilitate translation of OCT systems for airway imaging into the clinical setting.

  8. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiration. A variety of 4D imaging techniques have been developed, and others are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, can enable comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session will focus on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need for and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  9. Multi-modal MRI classifiers identify excessive alcohol consumption and treatment effects in the brain.

    PubMed

    Cosa, Alejandro; Moreno, Andrea; Pacheco-Torres, Jesús; Ciccocioppo, Roberto; Hyytiä, Petri; Sommer, Wolfgang H; Moratal, David; Canals, Santiago

    2017-09-01

    Robust neuroimaging markers of neuropsychiatric disorders have proven difficult to obtain. In alcohol use disorders, profound brain structural deficits can be found in severe alcoholic patients, but the heterogeneity of unimodal MRI measurements has so far precluded the identification of selective biomarkers, especially for early diagnosis. In the present work we used a combination of multiple MRI modalities to provide comprehensive and insightful descriptions of brain tissue microstructure. We performed a longitudinal experiment using Marchigian-Sardinian (msP) rats, an established model of chronic excessive alcohol consumption, and acquired multi-modal images before and after 1 month of alcohol consumption (6.8 ± 1.4 g/kg/day, mean ± SD), as well as after 1 week of abstinence with or without concomitant treatment with the antirelapse opioid antagonist naltrexone (2.5 mg/kg/day). We found remarkable sensitivity and selectivity to accurately classify brains affected by alcohol even after the relatively short exposure period. One month of drinking was enough to imprint a highly specific signature of alcohol consumption. Brain alterations were regionally specific, affected both gray and white matter, and persisted into the early abstinence state without any detectable recovery. Interestingly, naltrexone treatment during early abstinence resulted in subtle brain changes that could be distinguished from non-treated abstinent brains, suggesting the existence of an intermediate state associated with brain recovery from alcohol exposure induced by medication. The presented framework is a promising tool for the development of biomarkers for clinical diagnosis of alcohol use disorders, with the capacity to further inform about its progression and response to treatment. © 2016 Society for the Study of Addiction.

  10. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning.

    PubMed

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent; Wu, Guorong

    2017-07-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer's disease and Parkinson's disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. Copyright © 2017 Elsevier B.V. All rights reserved.
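
    For context, the conventional graph-based transductive baseline that pGTL refines can be written in a few lines: build a similarity graph in the feature domain and propagate the known labels to the unlabeled subjects. The sketch below uses scikit-learn's LabelPropagation on synthetic features and is not the pGTL algorithm itself.

    ```python
    # Minimal sketch of conventional graph-based transductive learning (the
    # baseline pGTL improves on): build a similarity graph over all subjects
    # and propagate labels from training to testing subjects.
    import numpy as np
    from sklearn.semi_supervised import LabelPropagation

    rng = np.random.default_rng(0)
    train_X = rng.normal(size=(60, 20)) + np.repeat([[0.0], [1.5]], 30, axis=0)
    train_y = np.repeat([0, 1], 30)
    test_X = rng.normal(size=(20, 20)) + np.repeat([[0.0], [1.5]], 10, axis=0)

    X = np.vstack([train_X, test_X])
    y = np.concatenate([train_y, -1 * np.ones(20, dtype=int)])  # -1 marks an unknown label

    model = LabelPropagation(kernel="rbf", gamma=0.05).fit(X, y)
    print("predicted labels for the test subjects:", model.transduction_[60:])
    ```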

  11. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    NASA Astrophysics Data System (ADS)

    Paelke, V.; Nebe, K.; Geiger, C.; Klompmaker, F.; Fischer, H.

    2012-07-01

    Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is the access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations conventional cartographic displays and mouse based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to control who is

  12. Multi-modal Learning-based Pre-operative Targeting in Deep Brain Stimulation Procedures.

    PubMed

    Liu, Yuan; Dawant, Benoit M

    2016-02-01

    Deep brain stimulation, as a primary surgical treatment for various neurological disorders, involves implanting electrodes to stimulate target nuclei within millimeter accuracy. Accurate pre-operative target selection is challenging due to the poor contrast in its surrounding region in MR images. In this paper, we present a learning-based method to automatically and rapidly localize the target using multi-modal images. A learning-based technique is applied first to spatially normalize the images in a common coordinate space. Given a point in this space, we extract a heterogeneous set of features that capture spatial and intensity contextual patterns at different scales in each image modality. Regression forests are used to learn a displacement vector of this point to the target. The target is predicted as a weighted aggregation of votes from various test samples, leading to a robust and accurate solution. We conduct five-fold cross validation using 100 subjects and compare our method to three indirect targeting methods, a state-of-the-art statistical atlas-based approach, and two variations of our method that use only a single modality image. With an overall error of 2.63±1.37mm, our method improves upon the single modality-based variations and statistically significantly outperforms the indirect targeting ones. Our technique matches state-of-the-art registration methods but operates on completely different principles. Both techniques can be used in tandem in processing pipelines operating on large databases or in the clinical flow for automated error detection.
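
    The regression-forest voting idea can be illustrated on synthetic data: contextual features at sampled points are regressed onto displacement vectors toward the target, and the votes from many test points are aggregated into a single estimate. The feature construction and forest settings below are assumptions, not the paper's configuration.

    ```python
    # Hedged sketch of the core idea: a regression forest maps contextual
    # features at a sample point to its 3-D displacement toward the target;
    # votes from many points are aggregated. Features here are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    target = np.array([12.0, -3.0, 40.0])                 # "true" target position (mm)
    pts = rng.uniform(-50, 50, size=(500, 3)) + target    # training sample points
    feats = np.hstack([pts, rng.normal(size=(500, 32))])  # position + context features
    disp = target - pts                                   # displacement labels

    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(feats, disp)

    # At test time each sampled point casts a vote: its position plus the
    # predicted displacement. The target estimate is the mean of the votes.
    test_pts = rng.uniform(-50, 50, size=(50, 3)) + target
    test_feats = np.hstack([test_pts, rng.normal(size=(50, 32))])
    votes = test_pts + forest.predict(test_feats)
    estimate = votes.mean(axis=0)
    print("estimated target:", estimate, "error (mm):", np.linalg.norm(estimate - target))
    ```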

  13. Virtual reality testing of multi-modal integration in schizophrenic patients.

    PubMed

    Sorkin, Anna; Peled, Avi; Weinshall, Daphna

    2005-01-01

    Our goal is to develop a new family of automatic tools for the diagnosis of schizophrenia, using Virtual Reality Technology (VRT). VRT is specifically suitable for this purpose, because it allows for multi-modal stimulation in a complex setup, and the simultaneous measurement of multiple parameters. In this work we studied sensory integration within working memory, in a navigation task through a VR maze. Along the way subjects pass through multiple rooms that include three doors each, only one of which can be used to legally exit the room. Specifically, each door is characterized by three features (color, shape and sound), and only one combination of features -- as determined by a transient opening rule -- is legal. The opening rule changes over time. Subjects must learn the rule and use it for successful navigation throughout the maze. 39 schizophrenic patients and 21 healthy controls participated in this study. Upon completion, each subject was assigned a performance profile, including various error scores, response time, navigation ability and strategy. We developed a classification procedure based on the subjects' performance profile, which correctly predicted 85% of the schizophrenic patients (and all the controls). We observed that a number of parameters showed significant correlation with standard diagnosis scores (PANSS), suggesting the potential use of our measurements for future diagnosis of schizophrenia. On the other hand, our patients did not show unusual repetition of response despite stimulus cessation (called perseveration in classical studies of schizophrenia), which is usually considered a robust marker of the disease. Interestingly, this deficit only appeared in our study when subjects did not receive proper explanation of the task.

  14. A NOVEL MULTI-MODAL DRUG REPURPOSING APPROACH FOR IDENTIFICATION OF POTENT ACK1 INHIBITORSǂ

    PubMed Central

    Phatak, Sharangdhar S.; Zhang, Shuxing

    2013-01-01

    Exploiting drug polypharmacology to identify novel modes of action for drug repurposing has gained significant attention in the current era of weak drug pipelines. From serendipitous to systematic or rational approaches, a variety of unimodal computational methods have been developed, but the complexity of the problem clearly calls for multi-modal approaches for better solutions. In this study, we propose an integrative computational framework based on classical structure-based drug design and chemical-genomic similarity methods, combined with molecular graph theories, for this task. Briefly, a pharmacophore modeling method was employed to guide the selection of docked poses resulting from our high-throughput virtual screening. We then evaluated whether complementary results (hits missed by docking) could be obtained by using a novel chemo-genomic similarity approach based on chemical/sequence information. Finally, we developed a bipartite graph based on extensive data curation of DrugBank, PDB, and UniProt. This drug-target bipartite graph was used to assess the similarity of different inhibitors based on their connections to other compounds and targets. The approaches were applied to the repurposing of existing drugs against ACK1, a novel cancer target significantly overexpressed in breast and prostate cancers during their progression. Upon screening of ~1,447 marketed drugs, a final set of 10 hits was selected for experimental testing. Among them, four drugs were identified as potent ACK1 inhibitors; notably, dasatinib inhibited ACK1 with an IC50 of 1 nM. We anticipate that our novel, integrative strategy can be easily extended to other biological targets with a more comprehensive coverage of known biochemical space for repurposing studies. PMID:23424109
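
    The drug-target bipartite-graph step can be illustrated with a toy graph in which drug-drug similarity is scored from shared target neighbourhoods. The drugs, targets, and edges below are invented for illustration; the study's graph was curated from DrugBank, PDB, and UniProt.

    ```python
    # Toy sketch of scoring drug-drug similarity from a drug-target bipartite
    # graph; the nodes and edges are made up for illustration only.
    import networkx as nx

    B = nx.Graph()
    drugs = ["dasatinib", "imatinib", "aspirin"]
    targets = ["ACK1", "ABL1", "SRC", "PTGS1"]
    B.add_nodes_from(drugs, bipartite=0)
    B.add_nodes_from(targets, bipartite=1)
    B.add_edges_from([("dasatinib", "ABL1"), ("dasatinib", "SRC"), ("dasatinib", "ACK1"),
                      ("imatinib", "ABL1"), ("imatinib", "SRC"),
                      ("aspirin", "PTGS1")])

    def jaccard(d1, d2):
        """Similarity of two drugs via their shared target neighbourhoods."""
        n1, n2 = set(B[d1]), set(B[d2])
        return len(n1 & n2) / len(n1 | n2)

    for a, b in [("dasatinib", "imatinib"), ("dasatinib", "aspirin")]:
        print(a, "vs", b, "->", round(jaccard(a, b), 2))
    ```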

  15. Multi-modal distraction. Using technology to combat pain in young children with burn injuries.

    PubMed

    Miller, Kate; Rodger, Sylvia; Bucolo, Sam; Greer, Ristan; Kimble, Roy M

    2010-08-01

    The use of non-pharmacological pain management remains ad hoc within acute paediatric burns pain management protocols despite ongoing acknowledgement of its role. Advancements in adult-based pain services, including the integration of virtual reality, have been adapted to meet the needs of children in pain, as exemplified by the development of multi-modal distraction (MMD). This easy-to-use, hand-held interactive device uses customized programs designed to inform the child about the procedure he/she is about to experience and to distract the child during dressing changes. (1) To investigate whether either MMD procedural preparation (MMD-PP) or distraction (MMD-D) has a greater impact on child pain reduction compared to standard distraction (SD) or hand-held video game distraction (VG), (2) to understand the impact of MMD-PP and MMD-D on clinic efficiency by measuring length of treatment across groups, and lastly, (3) to assess the efficacy of distraction techniques over three dressing change procedures. A prospective randomised controlled trial was completed in a paediatric tertiary hospital Burns Outpatient Clinic. Eighty participants were recruited and studied over their first three dressing changes. Pain was assessed using validated child report, caregiver report, nursing observation and physiological measures. MMD-D and MMD-PP were both shown to significantly relieve reported pain (p

  16. Multiscale and multi-modality visualization of angiogenesis in a human breast cancer model.

    PubMed

    Cebulla, Jana; Kim, Eugene; Rhie, Kevin; Zhang, Jiangyang; Pathak, Arvind P

    2014-07-01

    Angiogenesis in breast cancer helps fulfill the metabolic demands of the progressing tumor and plays a critical role in tumor metastasis. Therefore, various imaging modalities have been used to characterize tumor angiogenesis. While micro-CT (μCT) is a powerful tool for analyzing the tumor microvascular architecture at micron-scale resolution, magnetic resonance imaging (MRI) with its sub-millimeter resolution is useful for obtaining in vivo vascular data (e.g. tumor blood volume and vessel size index). However, integration of these microscopic and macroscopic angiogenesis data across spatial resolutions remains challenging. Here we demonstrate the feasibility of 'multiscale' angiogenesis imaging in a human breast cancer model, wherein we bridge the resolution gap between ex vivo μCT and in vivo MRI using intermediate resolution ex vivo MR microscopy (μMRI). To achieve this integration, we developed suitable vessel segmentation techniques for the ex vivo imaging data and co-registered the vascular data from all three imaging modalities. We showcase two applications of this multiscale, multi-modality imaging approach: (1) creation of co-registered maps of vascular volume from three independent imaging modalities, and (2) visualization of differences in tumor vasculature between viable and necrotic tumor regions by integrating μCT vascular data with tumor cellularity data obtained using diffusion-weighted MRI. Collectively, these results demonstrate the utility of 'mesoscopic' resolution μMRI for integrating macroscopic in vivo MRI data and microscopic μCT data. Although focused on the breast tumor xenograft vasculature, our imaging platform could be extended to include additional data types for a detailed characterization of the tumor microenvironment and computational systems biology applications.

  17. A custom multi-modal sensor suite and data analysis pipeline for aerial field phenotyping

    NASA Astrophysics Data System (ADS)

    Bartlett, Paul W.; Coblenz, Lauren; Sherwin, Gary; Stambler, Adam; van der Meer, Andries

    2017-05-01

    Our group has developed a custom, multi-modal sensor suite and data analysis pipeline to phenotype crops in the field using unpiloted aircraft systems (UAS). This approach to high-throughput field phenotyping is part of a research initiative intending to markedly accelerate the breeding process for refined energy sorghum varieties. To date, single rotor and multirotor helicopters, roughly 14 kg in total weight, are being employed to provide sensor coverage over multiple hectare-sized fields in tens of minutes. The quick, autonomous operations allow for complete field coverage under consistent plant and lighting conditions, with low operating costs. The sensor suite collects data simultaneously from six sensors and registers it for fusion and analysis. High-resolution color imagery, along with lidar measurements, targets color and geometric phenotypes. Long-wave infrared imagery targets temperature phenomena and plant stress. Hyperspectral visible and near-infrared imagery targets phenotypes such as biomass and chlorophyll content, as well as novel, predictive spectral signatures. Onboard spectrometers and careful laboratory and in-field calibration techniques aim to increase the physical validity of the sensor data throughout and across growing seasons. Off-line processing of data creates basic products such as image maps and digital elevation models. Derived data products include phenotype charts, statistics, and trends. The outcome of this work is a set of commercially available phenotyping technologies, including sensor suites, a fully integrated phenotyping UAS, and data analysis software. Effort is also underway to transition these technologies to farm management users by way of streamlined, lower cost sensor packages and intuitive software interfaces.

  18. Random forest-based similarity measures for multi-modal classification of Alzheimer’s disease

    PubMed Central

    Gray, Katherine R.; Aljabar, Paul; Heckemann, Rolf A.; Hammers, Alexander; Rueckert, Daniel

    2012-01-01

    Neurodegenerative disorders, such as Alzheimer’s disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multimodality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities outperforms classification based on any individual modality for comparisons between Alzheimer’s disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer’s disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differs by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the
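
    A hedged sketch of the framework on synthetic data: per-modality random-forest proximities (the fraction of trees in which two subjects share a leaf) are averaged into one similarity matrix, embedded with multidimensional scaling, and a classifier is trained on the joint-embedding coordinates. The modality sizes, forest settings, and classifier are assumptions.

    ```python
    # Hedged sketch: random-forest proximity per modality -> averaged similarity
    # -> joint embedding -> classification on the embedding coordinates.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.manifold import MDS
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 120
    y = np.repeat([0, 1], n // 2)
    # Two synthetic modalities with very different feature counts (e.g. imaging vs CSF).
    modalities = [y[:, None] * 1.2 + rng.normal(size=(n, d)) for d in (50, 5)]

    def rf_proximity(X, y):
        """Fraction of trees in which two subjects land in the same leaf."""
        leaves = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y).apply(X)
        return np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)

    similarity = np.mean([rf_proximity(X, y) for X in modalities], axis=0)
    embedding = MDS(n_components=5, dissimilarity="precomputed",
                    random_state=0).fit_transform(1.0 - similarity)
    print("joint-embedding classifier accuracy:",
          SVC().fit(embedding, y).score(embedding, y))
    ```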

  19. A Prototype Multi-Modality Picture Archive And Communication System At Victoria General Hospital

    NASA Astrophysics Data System (ADS)

    Nosil, J.; Justice, G.; Fisher, P.; Ritchie, G.; Weigl, W. J.; Gnoyke, H.

    1988-06-01

    The Medical Imaging Department at Victoria General Hospital is the first in Canada to implement an integrated multi-modality picture archive and communication system for clinical use. The aim of this paper is to present the current status of the picture archive and communication system components and to describe its function. This system was installed in April of 1987, and upgraded in November of 1987. A picture archive and communication system includes image sources, an image management system, and image display and reporting facilities. The installed image sources (digital radiography, digital fluoroscopy, computed tomography, and digital subtraction angiography) provide digital data for the image management system. The image management system provides facilities for receiving, storing, retrieving, and transmitting images using conventional computers and networks. There are two display stations, a viewing console and an image processing workstation, which provide various image display and manipulation functions. In parallel with the implementation of the picture archive and communication system, clinical, physical, and economic evaluations are being pursued. An initial examination of digital image transfer rates indicates that users will experience image availability times similar to those of conventional film imaging. Clinical experience to date with the picture archive and communication system has been limited to that required to evaluate digital imaging as a diagnostic tool, using digital radiography and digital fluoroscopy studies. Computed tomography and digital subtraction angiography have only recently been connected to the picture archive and communication system. Clinical experience with these modalities is limited to several cases, but image fidelity appears to be well above clinically acceptable levels.

  20. Multi-modal iterative adaptive processing (MIAP) performance in the discrimination mode for landmine detection

    NASA Astrophysics Data System (ADS)

    Yu, Yongli; Collins, Leslie M.

    2005-06-01

    Due to the nature of landmine detection, a high detection probability (Pd) is required to avoid casualties and injuries. However, high Pd is often obtained at the price of extremely high false alarm rates. It is widely accepted that no single sensor technology has the ability to achieve the required detection rate while keeping acceptably low false alarm rates for all types of mines in all types of soil and with all types of false targets. Remarkable advances in sensor technology for landmine detection have made multi-sensor fusion an attractive alternative to single sensor detection techniques. Hence, multi-sensor fusion mine detection systems, which use complementary sensor technologies, are proposed. Previously we proposed a new multi-sensor fusion algorithm called Multi-modal Iterative Adaptive Processing (MIAP), which incorporates information from multiple sensors in an adaptive Bayesian decision framework; the identification capabilities of the individual sensors are used to modify the statistical models utilized by the mine detector. Simulation results demonstrate the improvement in performance obtained using the MIAP algorithm. In this paper, we assume a hand-held mine detection system utilizing both an electromagnetic induction sensor (EMI) and a ground-penetrating radar (GPR). The hand-held mine detection sensors are designed to have two modes of operation: search mode and discrimination mode. Search mode generates an initial detection at the suspected location, and discrimination mode confirms whether there is a mine. The MIAP algorithm is applied in the discrimination mode for hand-held mine detection. The performance of the detector is evaluated on a data set collected by the government, and compared with other traditional fusion results.
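
    A much-simplified Bayesian decision-fusion step, combining EMI and GPR confidence scores under a conditional-independence assumption, is sketched below. It is not the MIAP algorithm; the class-conditional score models, prior, and threshold are invented for illustration.

    ```python
    # Simplified Bayesian decision fusion (not MIAP itself): combine EMI and GPR
    # likelihoods assuming conditional independence and threshold the posterior.
    from scipy.stats import norm

    prior_mine = 0.05
    # Hypothetical class-conditional models of each sensor's confidence score.
    emi_models = {"mine": norm(3.0, 1.0), "clutter": norm(0.0, 1.0)}
    gpr_models = {"mine": norm(2.0, 1.2), "clutter": norm(0.0, 1.2)}

    def posterior_mine(emi_score, gpr_score):
        like_mine = emi_models["mine"].pdf(emi_score) * gpr_models["mine"].pdf(gpr_score)
        like_clutter = emi_models["clutter"].pdf(emi_score) * gpr_models["clutter"].pdf(gpr_score)
        evidence = prior_mine * like_mine + (1 - prior_mine) * like_clutter
        return prior_mine * like_mine / evidence

    for emi, gpr in [(3.1, 2.4), (0.4, -0.2)]:
        p = posterior_mine(emi, gpr)
        print(f"EMI={emi:+.1f}, GPR={gpr:+.1f} -> P(mine)={p:.3f}",
              "-> declare mine" if p > 0.5 else "-> declare clutter")
    ```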

  1. A multi-modal prostate segmentation scheme by combining spectral clustering and active shape models

    NASA Astrophysics Data System (ADS)

    Toth, Robert; Tiwari, Pallavi; Rosen, Mark; Kalyanpur, Arjun; Pungavkar, Sona; Madabhushi, Anant

    2008-03-01

    Segmentation of the prostate boundary on clinical images is useful in a large number of applications including calculating prostate volume during biopsy, tumor estimation, and treatment planning. Manual segmentation of the prostate boundary is, however, time consuming and subject to inter- and intra-reader variability. Magnetic Resonance (MR) imaging (MRI) and MR Spectroscopy (MRS) have recently emerged as promising modalities for detection of prostate cancer in vivo. In this paper we present a novel scheme for accurate and automated prostate segmentation on in vivo 1.5 Tesla multi-modal MRI studies. The segmentation algorithm comprises two steps: (1) A hierarchical unsupervised spectral clustering scheme using MRS data to isolate the region of interest (ROI) corresponding to the prostate, and (2) an Active Shape Model (ASM) segmentation scheme where the ASM is initialized within the ROI obtained in the previous step. The hierarchical MRS clustering scheme in step 1 identifies spectra corresponding to locations within the prostate in an iterative fashion by discriminating between potential prostate and non-prostate spectra in a lower dimensional embedding space. The spatial locations of the prostate spectra so identified are used as the initial ROI for the ASM. The ASM is trained by identifying user-selected landmarks on the prostate boundary on T2 MRI images. Boundary points on the prostate are identified using mutual information (MI) as opposed to the traditional Mahalanobis distance, and the trained ASM is deformed to fit the boundary points so identified. Cross validation on 150 prostate MRI slices yields an average segmentation sensitivity, specificity, overlap, and positive predictive value of 89, 86, 83, and 93% respectively. We demonstrate that the accurate initialization of the ASM via the spectral clustering scheme is necessary for automated boundary extraction. Our method is fully automated, robust to system parameters, and computationally efficient.
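
    The first stage, clustering spectral vectors to isolate a candidate prostate region of interest, can be sketched on synthetic spectra. scikit-learn's SpectralClustering stands in for the authors' hierarchical scheme, and the spectra, cluster count, and minority-cluster heuristic are assumptions.

    ```python
    # Sketch of the first stage on synthetic spectra: cluster MRS spectral
    # vectors to separate likely prostate from non-prostate voxels; the
    # resulting ROI would then be used to initialise the ASM.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    prostate = rng.normal(loc=1.0, scale=0.3, size=(40, 64))      # 64-point spectra
    background = rng.normal(loc=0.0, scale=0.3, size=(160, 64))
    spectra = np.vstack([prostate, background])

    labels = SpectralClustering(n_clusters=2, affinity="rbf",
                                random_state=0).fit_predict(spectra)
    # Keep the smaller cluster as the candidate prostate ROI, on the assumption
    # that prostate voxels are a minority of the field of view.
    roi_label = np.argmin(np.bincount(labels))
    print("candidate ROI size:", int(np.sum(labels == roi_label)), "of", len(labels), "voxels")
    ```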

  2. The Case for Open Source Software in Digital Forensics

    NASA Astrophysics Data System (ADS)

    Zanero, Stefano; Huebner, Ewa

    In this introductory chapter we discuss the importance of the use of open source software (OSS), and in particular of free software (FLOSS) in computer forensics investigations including the identification, capture, preservation and analysis of digital evidence; we also discuss the importance of OSS in computer forensics

  3. Open Source Projects in Software Engineering Education: A Mapping Study

    ERIC Educational Resources Information Center

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  4. Open source tools for ATR development and performance evaluation

    NASA Astrophysics Data System (ADS)

    Baumann, James M.; Dilsavor, Ronald L.; Stubbles, James; Mossing, John C.

    2002-07-01

    Early in almost every engineering project, a decision must be made about tools; should I buy off-the-shelf tools or should I develop my own. Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and to maintain licenses, and may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.

  5. Chinese Localisation of Evergreen: An Open Source Integrated Library System

    ERIC Educational Resources Information Center

    Zou, Qing; Liu, Guoying

    2009-01-01

    Purpose: The purpose of this paper is to investigate various issues related to Chinese language localisation in Evergreen, an open source integrated library system (ILS). Design/methodology/approach: A Simplified Chinese version of Evergreen was implemented and tested and various issues such as encoding, indexing, searching, and sorting…

  6. Open Source Solutions for Libraries: ABCD vs Koha

    ERIC Educational Resources Information Center

    Macan, Bojan; Fernandez, Gladys Vanesa; Stojanovski, Jadranka

    2013-01-01

    Purpose: The purpose of this study is to present an overview of the two open source (OS) integrated library systems (ILS)--Koha and ABCD (ISIS family), to compare their "next-generation library catalog" functionalities, and to give comparison of other important features available through ILS modules. Design/methodology/approach: Two open source…

  7. Faculty/Student Surveys Using Open Source Software

    ERIC Educational Resources Information Center

    Kaceli, Sali

    2004-01-01

    This session will highlight an easy survey package which lets non-technical users create surveys, administer surveys, gather results, and view statistics. This is an open source application all managed online via a web browser. By using phpESP, the faculty is given the freedom of creating various surveys at their convenience and link them to their…

  8. The Value of Open Source Software Tools in Qualitative Research

    ERIC Educational Resources Information Center

    Greenberg, Gary

    2011-01-01

    In an era of global networks, researchers using qualitative methods must consider the impact of any software they use on the sharing of data and findings. In this essay, I identify researchers' main areas of concern regarding the use of qualitative software packages for research. I then examine how open source software tools, wherein the publisher…

  9. Current challenges in open-source bioimage informatics.

    PubMed

    Cardona, Albert; Tomancak, Pavel

    2012-06-28

    We discuss the advantages and challenges of the open-source strategy in biological image analysis and argue that its full impact will not be realized without better support and recognition of software engineers' contributions to the biological sciences and more support of this development model from funders and institutions.

  12. [Osirix: free and open-source software for medical imagery].

    PubMed

    Jalbert, F; Paoli, J R

    2008-02-01

    Osirix is a tool for diagnostic imaging, teaching, and research that has many possible applications in maxillofacial and oral surgery. It is free and open-source software developed on Mac OS X (Apple) by Dr Antoine Rosset and Dr Osman Ratib in the department of radiology and medical computing in Geneva (Switzerland).

  13. Is Open Source the ERP Cure-All?

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…

  15. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  16. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  17. Higher Education Sub-Cultures and Open Source Adoption

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2011-01-01

    Successful adoption of new teaching and learning technologies in higher education requires the consensus of two sub-cultures, namely the technologist sub-culture and the academic sub-culture. This paper examines trends in adoption of open source software (OSS) for teaching and learning by comparing the results of a 2009 survey of 285 Chief…

  19. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577

  20. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  1. Critical Analysis on Open Source LMSs Using FCA

    ERIC Educational Resources Information Center

    Sumangali, K.; Kumar, Ch. Aswani

    2013-01-01

    The objective of this paper is to apply Formal Concept Analysis (FCA) to identify the best open source Learning Management System (LMS) for an E-learning environment. FCA is a mathematical framework that represents knowledge derived from a formal context. In constructing the formal context, LMSs are treated as objects and their features as…
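
    The formal-context construction described above (LMSs as objects, their features as attributes) can be illustrated with a tiny derivation-operator example; the LMS names and features below are invented.

    ```python
    # Toy formal context (objects x attributes) and the FCA derivation
    # operators; the LMSs and features are invented for illustration.
    context = {
        "Moodle": {"quizzes", "forums", "scorm"},
        "ILIAS":  {"quizzes", "scorm"},
        "Sakai":  {"forums"},
    }

    def common_attributes(objects):
        """A' : attributes shared by every object in the set."""
        sets = [context[o] for o in objects]
        return set.intersection(*sets) if sets else set()

    def common_objects(attributes):
        """B' : objects that possess every attribute in the set."""
        return {o for o, attrs in context.items() if attributes <= attrs}

    A = {"Moodle", "ILIAS"}
    B = common_attributes(A)
    # (A, B) is a formal concept when common_objects(B) == A.
    print(B, common_objects(B))
    ```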

  2. Open Source Software: Fully Featured vs. "The Devil You Know"

    ERIC Educational Resources Information Center

    Hotrum, Michael; Ludwig, Brian; Baggaley, Jon

    2005-01-01

    The "ILIAS" learning management system (LMS) was evaluated, following its favourable rating in an independent evaluation study of open source software (OSS) products. The current review found "ILIAS" to have numerous features of value to distance education (DE) students and teachers, as well as problems for consideration in the…

  8. Teaching Undergraduate Software Engineering Using Open Source Development Tools

    DTIC Science & Technology

    2012-01-01

    Published in Issues in Informing Science and Information Technology, Volume 9, 2012. …multi-course sequence, to teach students both the theoretical concepts of software development as well as the practical aspects of developing software…

  9. Bioclipse: an open source workbench for chemo- and bioinformatics

    PubMed Central

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl ES

    2007-01-01

    Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net. PMID:17316423

  10. NASA's Open Source Software for Serving and Viewing Global Imagery

    NASA Astrophysics Data System (ADS)

    Roberts, J. T.; Alarcon, C.; Boller, R. A.; Cechini, M. F.; Gunnoe, T.; Hall, J. R.; Huang, T.; Ilavajhala, S.; King, J.; McGann, M.; Murphy, K. J.; Plesea, L.; Schmaltz, J. E.; Thompson, C. K.

    2014-12-01

    The NASA Global Imagery Browse Services (GIBS), which provide open access to an enormous archive of historical and near real time imagery from NASA supported satellite instruments, have also released most of their software to the general public as open source. The software packages, originally developed at the Jet Propulsion Laboratory and Goddard Space Flight Center, currently include: 1) the Meta Raster Format (MRF) GDAL driver—GDAL support for a specialized file format used by GIBS to store imagery within a georeferenced tile pyramid for exceptionally fast access; 2) OnEarth—a high performance Apache module used to serve tiles from MRF files via common web service protocols; 3) Worldview—a web mapping client to interactively browse global, full-resolution satellite imagery and download underlying data. Examples that show developers how to use GIBS with various mapping libraries and programs are also available. This stack of tools is intended to provide an out-of-the-box solution for serving any georeferenced imagery. Scientists as well as the general public can use the open source software for their own applications such as developing visualization interfaces for improved scientific understanding and decision support, hosting a repository of browse images to help find and discover satellite data, or accessing large datasets of geo-located imagery in an efficient manner. Open source users may also contribute back to NASA and the wider Earth Science community by taking an active role in evaluating and developing the software. This presentation will discuss the experiences of developing the software in an open source environment and useful lessons learned. To access the open source software repositories, please visit: https://github.com/nasa-gibs/
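
    As a usage illustration, a single GIBS browse tile can be fetched over plain WMTS with an HTTP GET. The endpoint template, layer name, date, and tile-matrix set below are assumptions based on the public GIBS documentation and should be checked against it before use.

    ```python
    # Hedged sketch: fetch one GIBS browse tile over WMTS with plain HTTP.
    # The URL template, layer name, and tile-matrix set are assumptions to be
    # verified against the GIBS documentation.
    import requests

    TEMPLATE = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
                "{layer}/default/{date}/{matrix_set}/{z}/{row}/{col}.jpg")

    url = TEMPLATE.format(layer="MODIS_Terra_CorrectedReflectance_TrueColor",
                          date="2014-08-01", matrix_set="250m", z=2, row=1, col=2)
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    with open("gibs_tile.jpg", "wb") as fh:
        fh.write(resp.content)
    print("saved", len(resp.content), "bytes from", url)
    ```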

  11. Bioclipse: an open source workbench for chemo- and bioinformatics.

    PubMed

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl E S

    2007-02-22

    There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.

  12. Open source drug discovery in practice: a case study.

    PubMed

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionately affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites, and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality research at low cost. The critical success factors appear to be clearly

  13. Open Source Drug Discovery in Practice: A Case Study

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Background Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionally affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. Methodology/Principal Findings A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal, with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Conclusions/Significance Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality

  14. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  15. Cross-Platform Mobile Application Development: A Pattern-Based Approach

    DTIC Science & Technology

    2012-03-01

    …to the cross-platform programming domain is the use of Web Applications. With the current shift of the Internet to HTML5, the mobile device's web…

  16. CROPPER: a metagene creator resource for cross-platform and cross-species compendium studies

    PubMed Central

    Paananen, Jussi; Storvik, Markus; Wong, Garry

    2006-01-01

    Background Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from different heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate the combining of data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies. Results We have developed a web-based software resource, called CROPPER that uses the latest genomic information concerning different data identifiers and orthologous genes from the Ensembl database. CROPPER can be used to combine genomic data from different heterogeneous sources, allowing researchers to perform cross-platform/cross-species compendium studies without the need for complex computational tools or the requirement of setting up one's own in-house database. We also present an example of a simple cross-platform/cross-species compendium study based on publicly available Parkinson's disease data derived from different sources. Conclusion CROPPER is a user-friendly and freely available web-based software resource that can be successfully used for cross-species/cross-platform compendium studies. PMID:16995941

  17. Transforming High School Classrooms with Free/Open Source Software: "It's Time for an Open Source Software Revolution"

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2008-01-01

    Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…

  18. Beyond Open Source: According to Jim Hirsch, Open Technology, Not Open Source, Is the Wave of the Future

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    This article presents an interview with Jim Hirsch, an associate superintendent for technology at Plano Independent School District in Plano, Texas. Hirsch serves as a liaison for the open technologies committee of the Consortium for School Networking. In this interview, he shares his opinion on the significance of open source in K-12.

  19. Open Source and ROI: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    A switch to free open source software can minimize cost and allow funding to be diverted to equipment and other programs. For instance, the OpenOffice suite is an alternative to expensive basic application programs offered by major vendors. Many such programs on the market offer features seldom used in education but for which educators must pay.…

  20. Transforming High School Classrooms with Free/Open Source Software: "It's Time for an Open Source Software Revolution"

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2008-01-01

    Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…

  1. Optimizing boundary detection via Simulated Search with applications to multi-modal heart segmentation.

    PubMed

    Peters, J; Ecabert, O; Meyer, C; Kneser, R; Weese, J

    2010-02-01

    Segmentation of medical images can be achieved with the help of model-based algorithms. Reliable boundary detection is a crucial component to obtain robust and accurate segmentation results and to enable full automation. This is especially important if the anatomy being segmented is too variable to initialize a mean shape model such that all surface regions are close to the desired contours. Several boundary detection algorithms are widely used in the literature. Most use some trained image appearance model to characterize and detect the desired boundaries. Although parameters of the boundary detection can vary over the model surface and are trained on images, their performance (i.e., accuracy and reliability of boundary detection) can only be assessed as an integral part of the entire segmentation algorithm. In particular, assessment of boundary detection cannot be done locally and independently of the model parameterization and the internal energies controlling geometric model properties. In this paper, we propose a new method for the local assessment of boundary detection called Simulated Search. This method takes any boundary detection function and evaluates its performance for a single model landmark in terms of an estimated geometric boundary detection error. In consequence, boundary detection can be optimized per landmark during model training. We demonstrate the success of the method for cardiac image segmentation. In particular we show that the Simulated Search improves the capture range and the accuracy of the boundary detection compared to a traditional training scheme. We also illustrate how the Simulated Search can be used to identify suitable classes of features when addressing a new segmentation task. Finally, we show that the Simulated Search enables multi-modal heart segmentation using a single algorithmic framework. On computed tomography and magnetic resonance images, average segmentation errors (surface-to-surface distances) for the four chambers and

  2. Clinical Evaluation of a Multi-Modal Facial Serum That Addresses Hyaluronic Acid Levels in Skin.

    PubMed

    Raab, Susana; Yatskayer, Margarita; Lynch, Stephen; Manco, Megan; Oresajo, Christian

    2017-09-01

    Hyaluronic acid (HA), the major glycosaminoglycan present in the human skin, is a key contributor to water retention and mechanical support in skin. The level, size, and functionality of cutaneous HA are known to diminish with age. Topical treatments designed to increase the HA content of skin have been met with limited success. The purpose of this study was to evaluate the tolerance and efficacy of a multi-modal facial serum containing HA, Proxylane (C-Xyloside), purple rice extract, and dipotassium glycyrrhizate in addressing HA levels in skin. A 12-week, single center, clinical study was conducted on 59 women with mild to moderate photodamage. Clinical grading to assess the efficacy and tolerability was conducted on the face at baseline and at weeks 4, 8, and 12. Bioinstrumentation measurements were taken, including corneometer, tewameter, ultrasound, and standardized digital imaging. A randomized subset of 20 subjects from the study population had 3 mm punch biopsies collected for quantitative RT-PCR analysis from 2 sites on the face at baseline and week 12. Additionally, a 4-week, single center, clinical study was conducted on the photodamaged forearms of 12 subjects. At both baseline and week 4, a 4 mm punch biopsy was obtained from the subjects' randomized forearms. Biopsy samples were subjected to immunohistochemical staining and analysis of HA content. Statistically-significant improvements in all facial skin attributes (weeks 4, 8, and 12), stratum corneum hydration (week 12), and transepidermal water loss (week 12) were observed. Tolerability was excellent, with no increases in irritation parameters noted. A significant increase of HA content in skin after 4 weeks of treatment was observed. By PCR analysis, there was a significant increase in hyaluronan synthase 2, as well as a significant increase in collagen type 1a1 after 12 weeks of application. The findings suggest that this novel topical facial serum is capable of stimulating HA and skin

  3. An Open-Source and Java-Technologies Approach to Web Applications

    DTIC Science & Technology

    2003-09-01

    …currently being replaced by open-source. This thesis explores using open-source and Java technologies to implement Web applications. A prototype of the…

  4. Virtual Machine for Computer Forensics - the Open Source Perspective

    NASA Astrophysics Data System (ADS)

    Bem, Derek

    In this paper we discuss the potential role of virtual environments in the analysis phase of computer forensics investigations. We argue that commercial closed source computer forensics software has certain limitations, and we propose a method which may lead to a gradual shift to open source software (OSS). A brief overview of virtual environments and open source software tools is presented and discussed. Further, we identify current limitations of virtual environments, leading to the conclusion that the method is very promising, but at this point in time it cannot replace conventional techniques of computer forensics analysis. We demonstrate that using Virtual Machines (VM) in Linux environments can complement the conventional techniques, and often can bring faster and verifiable results not dependent on proprietary, closed source tools.

  5. BioJava: an open-source framework for bioinformatics.

    PubMed

    Holland, R C G; Down, T A; Pocock, M; Prlić, A; Huen, D; James, K; Foisy, S; Dräger, A; Yates, A; Heuer, M; Schreiber, M J

    2008-09-15

    BioJava is a mature open-source project that provides a framework for processing of biological data. BioJava contains powerful analysis and statistical routines, tools for parsing common file formats and packages for manipulating sequences and 3D structures. It enables rapid bioinformatics application development in the Java programming language. BioJava is an open-source project distributed under the Lesser GPL (LGPL). BioJava can be downloaded from the BioJava website (http://www.biojava.org). BioJava requires Java 1.5 or higher. All queries should be directed to the BioJava mailing lists. Details are available at http://biojava.org/wiki/BioJava:MailingLists.

  6. Computer aided die design: A new open-source methodology

    NASA Astrophysics Data System (ADS)

    Carneiro, Olga Sousa; Rajkumar, Ananth; Ferrás, Luís Lima; Fernandes, Célio; Sacramento, Alberto; Nóbrega, João Miguel

    2017-05-01

    In this work we present a detailed description of how to use open source based computer codes to aid the design of complex profile extrusion dies, aiming to improve their flow distribution. The work encompasses the description of the overall open-source die design methodology, the implementation of the energy conservation equation in an existing OpenFOAM® solver, which will then be capable of simulating the steady non-isothermal flow of an incompressible generalized Newtonian fluid, and two case studies to illustrate the capabilities and practical usefulness of the developed methodology. The results obtained with these case studies, used to solve real industrial problems, demonstrate that the computational design aid is an excellent alternative, from economical and technical points of view, to the experimental trial-and-error procedure commonly used in industry.
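
    As a reference point for the solver extension mentioned above, a commonly used form of the steady temperature equation for an incompressible generalized Newtonian fluid (assuming constant density, heat capacity, and thermal conductivity, and including viscous dissipation) is given below; this is a standard textbook form, not necessarily the exact equation implemented in the authors' solver.

      % Steady energy balance for an incompressible generalized Newtonian fluid
      % (constant rho, c_p, k assumed; eta(\dot{\gamma}) is the shear-rate-dependent viscosity)
      \rho c_p \, \mathbf{u} \cdot \nabla T
        = \nabla \cdot \left( k \, \nabla T \right)
        + \eta(\dot{\gamma}) \, \dot{\gamma}^{2},
      \qquad
      \dot{\gamma} = \sqrt{2\, \mathbf{D} : \mathbf{D}},
      \quad
      \mathbf{D} = \tfrac{1}{2}\left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right).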

  7. Open source, open standards, and health care information systems.

    PubMed

    Reynolds, Carl J; Wyatt, Jeremy C

    2011-02-17

    Recognition of the improvements in patient safety, quality of patient care, and efficiency that health care information systems have the potential to bring has led to significant investment. Globally the sale of health care information systems now represents a multibillion dollar industry. As policy makers, health care professionals, and patients, we have a responsibility to maximize the return on this investment. To this end we analyze alternative licensing and software development models, as well as the role of standards. We describe how licensing affects development. We argue for the superiority of open source licensing to promote safer, more effective health care information systems. We claim that open source licensing in health care information systems is essential to rational procurement strategy.

  8. Comparison of open-source visual analytics toolkits

    NASA Astrophysics Data System (ADS)

    Harger, John R.; Crossno, Patricia J.

    2012-01-01

    We present the results of the first stage of a two-stage evaluation of open source visual analytics packages. This stage is a broad feature comparison over a range of open source toolkits. Although we had originally intended to restrict ourselves to comparing visual analytics toolkits, we quickly found that very few were available. So we expanded our study to include information visualization, graph analysis, and statistical packages. We examine three aspects of each toolkit: visualization functions, analysis capabilities, and development environments. With respect to development environments, we look at platforms, language bindings, multi-threading/parallelism, user interface frameworks, ease of installation, documentation, and whether the package is still being actively developed.

  9. Open Source, Open Standards, and Health Care Information Systems

    PubMed Central

    2011-01-01

    Recognition of the improvements in patient safety, quality of patient care, and efficiency that health care information systems have the potential to bring has led to significant investment. Globally the sale of health care information systems now represents a multibillion dollar industry. As policy makers, health care professionals, and patients, we have a responsibility to maximize the return on this investment. To this end we analyze alternative licensing and software development models, as well as the role of standards. We describe how licensing affects development. We argue for the superiority of open source licensing to promote safer, more effective health care information systems. We claim that open source licensing in health care information systems is essential to rational procurement strategy. PMID:21447469

  10. An open source model for open access journal publication.

    PubMed

    Blesius, Carl R; Williams, Michael A; Holzbach, Ana; Huntley, Arthur C; Chueh, Henry

    2005-01-01

    We describe an electronic journal publication infrastructure that allows a flexible publication workflow, academic exchange around different forms of user submissions, and the exchange of articles between publishers and archives using a common XML based standard. This web-based application is implemented on a freely available open source software stack. This publication demonstrates the Dermatology Online Journal's use of the platform for non-biased independent open access publication.

  11. GISCube, an Open Source Web-based GIS Application

    NASA Astrophysics Data System (ADS)

    Boustani, M.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    There are many Earth science projects and data systems being developed at the Jet Propulsion Laboratory, California Institute of Technology (JPL) that require the use of Geographic Information Systems (GIS). Three in particular are: (1) the JPL Airborne Snow Observatory (ASO) that measures the amount of water being generated from snow melt in mountains; (2) the Regional Climate Model Evaluation System (RCMES) that compares climate model outputs with remote sensing datasets in the context of model evaluation and the Intergovernmental Panel on Climate Change and for the U.S. National Climate Assessment; and (3) the JPL Snow Server that produces a snow and ice climatology for the Western US and Alaska, for the U.S. National Climate Assessment. Each of these three examples, and all other Earth science projects, are strongly in need of GIS and geoprocessing capabilities to process, visualize, manage and store geospatial data. Besides some open source GIS libraries and software such as ArcGIS, there are comparatively few open source, web-based, and easy-to-use applications capable of GIS processing and visualization. To address this, we present GISCube, an open source web-based GIS application that can store, visualize and process GIS and geospatial data. GISCube is powered by Geothon, an open source Python GIS cookbook. Geothon has a variety of geoprocessing tools, such as data conversion, processing, spatial analysis and data management tools. GISCube has the capability of supporting a variety of well known GIS data formats in both vector and raster formats, and the system is being expanded to support NASA and other scientific data formats such as netCDF and HDF files. In this talk, we demonstrate how Earth science and other projects can benefit by using GISCube and Geothon, and we describe the system's current goals and our future work in the area.
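
    As a small, hedged aside on the scientific data formats mentioned here, reading a variable out of a netCDF file in Python can be done with the netCDF4 package; the file and variable names below are hypothetical, and this is a generic illustration rather than Geothon or GISCube code.

      from netCDF4 import Dataset  # pip install netCDF4

      # Open a (hypothetical) gridded snow-cover file and read one variable with its coordinates.
      ds = Dataset("snow_cover.nc")
      lats = ds.variables["lat"][:]                 # 1-D latitude coordinate
      lons = ds.variables["lon"][:]                 # 1-D longitude coordinate
      snow = ds.variables["snow_cover"][0, :, :]    # first time step of a (time, lat, lon) variable
      print(snow.shape, float(snow.mean()))
      ds.close()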

  12. Open Source Software For Patient Data Management In Critical Care.

    PubMed

    Massaut, Jacques; Charretk, Nicolas; Gayraud, Olivia; Van Den Bergh, Rafael; Charles, Adelin; Edema, Nathalie

    2015-01-01

    We have previously developed a Patient Data Management System for Intensive Care based on Open Source Software. The aim of this work was to adapt this software to use in Emergency Departments in low resource environments. The new software includes facilities for utilization of the South African Triage Scale and prediction of mortality based on independent predictive factors derived from data from the Tabarre Emergency Trauma Center in Port au Prince, Haiti.

  13. Open source tools for large-scale neuroscience.

    PubMed

    Freeman, Jeremy

    2015-06-01

    New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.

  14. DESIGN NOTE: SCOUT - Surface Characterization Open-Source Universal Toolbox

    NASA Astrophysics Data System (ADS)

    Sacerdotti, F.; Porrino, A.; Butler, C.; Brinkmann, S.; Vermeulen, M.

    2002-02-01

    Surface topography plays a significant role in functional performance situations like friction, lubrication and wear. A European Community funded research programme on areal characterization of steel sheet has recently assisted research in this area. This article is dedicated to the software that supported most of the programme. Born as a rudimentary collection of procedures, it grew steadily to become an integrated package, later equipped with a graphical interface and circulated to the research community employing the Open-Source philosophy.

  15. Open Source Intelligence - Doctrine’s Neglected Child

    DTIC Science & Technology

    2007-11-02

    Excerpted footnotes cite Richard S. Friedman, "Open Source Intelligence," Parameters (Summer 1998); David Reed, "Aspiring to Spying"; Wyn Bowen; and Ray Cline, "Introduction," The Intelligence War (London). The excerpt also lists evacuation operations, counter-terrorist operations, foreign internal defense, peace operations, consequence management, and humanitarian assistance.

  16. Cassandra: An open source Monte Carlo package for molecular simulation.

    PubMed

    Shah, Jindal K; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Keene, Brian P; Khan, Sandip; Paluch, Andrew S; Rai, Neeraj; Romanielo, Lucienne L; Rosch, Thomas W; Yoo, Brian; Maginn, Edward J

    2017-07-15

    Cassandra is an open source atomistic Monte Carlo software package that is effective in simulating the thermodynamic properties of fluids and solids. The different features and algorithms used in Cassandra are described, along with implementation details and theoretical underpinnings of the various methods used. Benchmark and example calculations are shown, and information on how users can obtain the package and contribute to it is provided. © 2017 Wiley Periodicals, Inc.
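
    For orientation, the acceptance step at the heart of any atomistic Monte Carlo package of this kind can be sketched as follows; this is a generic Metropolis displacement move in Python, not Cassandra's actual implementation, and the energy function is left as a user-supplied callable.

      import math
      import random

      def metropolis_step(energy_fn, coords, beta, max_disp=0.1):
          """One single-particle displacement trial with Metropolis acceptance.

          energy_fn : callable returning the total potential energy of `coords`
          coords    : list of (x, y, z) tuples
          beta      : 1 / (k_B * T)
          """
          i = random.randrange(len(coords))
          old = coords[i]
          trial = tuple(c + random.uniform(-max_disp, max_disp) for c in old)

          e_old = energy_fn(coords)
          coords[i] = trial
          e_new = energy_fn(coords)

          # Accept with probability min(1, exp(-beta * dE)); otherwise revert the move.
          if e_new > e_old and random.random() >= math.exp(-beta * (e_new - e_old)):
              coords[i] = old          # rejected: restore old position
              return False
          return True                  # accepted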

  17. How Open Source Can Still Save the World

    NASA Astrophysics Data System (ADS)

    Behlendorf, Brian

    Many of the world's major problems - economic distress, natural disaster responses, broken health care systems, education crises, and more - are not fundamentally information technology issues. However, in every case mentioned and more, there exist opportunities for Open Source software to uniquely change the way we can address these problems. At times this is about addressing a need for which no sufficient commercial market exists. For others, it is in the way Open Source licenses free the recipient from obligations to the creators, creating a relationship of mutual empowerment rather than one of dependency. For yet others, it is in the way the open collaborative processes that form around Open Source software provide a neutral ground for otherwise competitive parties to find the greatest common set of mutual needs to address together rather than in parallel. Several examples of such software exist today and are gaining traction. Governments, NGOs, and businesses are beginning to recognize the potential and are organizing to meet it. How far can this be taken?

  18. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and ease of access. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting Geosciences. This paper provides a comprehensive study of three open-source cloud solutions, including OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions; (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies; (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula; and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing intensive applications, and small-scale model simulations without intensive data communication.

  19. Open-source, community-driven microfluidics with Metafluidics.

    PubMed

    Kong, David S; Thorsen, Todd A; Babb, Jonathan; Wick, Scott T; Gam, Jeremy J; Weiss, Ron; Carr, Peter A

    2017-06-07

    Microfluidic devices have the potential to automate and miniaturize biological experiments, but open-source sharing of device designs has lagged behind sharing of other resources such as software. Synthetic biologists have used microfluidics for DNA assembly, cell-free expression, and cell culture, but a combination of expense, device complexity, and reliance on custom set-ups hampers their widespread adoption. We present Metafluidics, an open-source, community-driven repository that hosts digital design files, assembly specifications, and open-source software to enable users to build, configure, and operate a microfluidic device. We use Metafluidics to share designs and fabrication instructions for both a microfluidic ring-mixer device and a 32-channel tabletop microfluidic controller. This device and controller are applied to build genetic circuits using standard DNA assembly methods including ligation, Gateway, Gibson, and Golden Gate. Metafluidics is intended to enable a broad community of engineers, DIY enthusiasts, and other nontraditional participants with limited fabrication skills to contribute to microfluidic research.

  20. Open-Source Automated Mapping Four-Point Probe

    PubMed Central

    Chandra, Handy; Allen, Spencer W.; Oberloier, Shane W.; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M.

    2017-01-01

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results of resistors from 10 to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas. PMID:28772471
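
    For context (not taken from the article itself), the standard relation used to convert a collinear four-point probe reading to sheet resistance, assuming equally spaced probes on a thin, laterally extended sample, is:

      % Sheet resistance from a collinear, equally spaced four-point probe
      % (thin-film limit: thickness much smaller than probe spacing,
      %  lateral dimensions much larger than probe spacing)
      R_s = \frac{\pi}{\ln 2}\,\frac{V}{I} \approx 4.532\,\frac{V}{I},
      \qquad
      \rho = R_s \, t \quad \text{(bulk resistivity for film thickness } t\text{)}.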

  1. Free and open-source automated 3-D microscope.

    PubMed

    Wijnen, Bas; Petersen, Emily E; Hunt, Emily J; Pearce, Joshua M

    2016-11-01

    Open-source technology not only has facilitated the expansion of the greater research community, but by lowering costs it has encouraged innovation and customizable design. The field of automated microscopy has continued to be a challenge in accessibility due to the expense and inflexibility of noninterchangeable stages. This paper presents a low-cost, open-source microscope 3-D stage. A RepRap 3-D printer was converted to an optical microscope equipped with a customized, 3-D printed holder for a USB microscope. Precision measurements were determined to have an average error of 10 μm at the maximum speed and 27 μm at the minimum recorded speed. Accuracy tests yielded an error of 0.15%. The machine is a true 3-D stage and thus able to operate with USB microscopes or conventional desktop microscopes. It is larger than all commercial alternatives, and is thus capable of high-depth images over unprecedented areas and complex geometries. The repeatability is below that of 2-D microscope stages, but testing shows that it is adequate for the majority of scientific applications. The open-source microscope stage costs less than 3-9% of the closest proprietary commercial stages. This extreme affordability vastly improves accessibility for 3-D microscopy throughout the world. © 2016 The Authors. Journal of Microscopy © 2016 Royal Microscopical Society.

  2. Open-Source Automated Mapping Four-Point Probe.

    PubMed

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results of resistors from 10 to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.

  3. Development of an Open-Source, Discrete Element Knee Model.

    PubMed

    Schmitz, Anne; Piovesan, Davide

    2016-10-01

    Biomechanical modeling is an important tool in that it can provide estimates of forces that cannot easily be measured (e.g., soft tissue loads). The goal of this study was to develop a discrete element model of the knee that is open source to allow for utilization of modeling by a wider audience of researchers. A six degree-of-freedom tibiofemoral and one degree-of-freedom patellofemoral joint were created in OpenSim. Eighteen ligament bundles and tibiofemoral contact were included in the model. During a passive flexion movement, maximum deviation of the model from the literature occurred at the most flexed angle with deviations of 2° adduction, 7° internal rotation, 1-mm posterior translation, 12-mm inferior translation, and 4-mm lateral translation. Similarly, the overall elongation of the ligaments agreed with literature values with strains of less than 13%. These results provide validation of the physiological relevance of the model. This model is one of the few open source, discrete element knee models to date, and has many potential applications, one being for use in an open-source cosimulation framework.

  4. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy

    PubMed Central

    Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk

    2010-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structure information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and ultrasonic, photoacoustic, and spectroscopic data coregistration were conducted in a physical phantom with properties of ultrasound scattering, optical absorption, and fluorescence. The UBM system with the 41 MHz ring transducer can reach an axial and lateral resolution of 30 and 65 μm, respectively. The PAI system with 532 nm excitation light from a Nd:YAG laser shows great contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in diagnosis of tumors

  5. Microfluidic electro-sonoporation: a multi-modal cell poration methodology through simultaneous application of electric field and ultrasonic wave.

    PubMed

    Longsine-Parker, Whitney; Wang, Han; Koo, Chiwan; Kim, Jeongyun; Kim, Beomjoon; Jayaraman, Arul; Han, Arum

    2013-06-07

    A microfluidic device that simultaneously applies the conditions required for microelectroporation and microsonoporation in a flow-through scheme toward high-efficiency and high-throughput molecular delivery into mammalian cells is presented. This multi-modal poration microdevice using simultaneous application of electric field and ultrasonic wave was realized by a three-dimensional (3D) microelectrode scheme where the electrodes function as both electroporation electrodes and cell flow channel so that acoustic wave can be applied perpendicular to the electric field simultaneously to cells flowing through the microfluidic channel. This 3D microelectrode configuration also allows a uniform electric field to be applied while making the device compatible with fluorescent microscopy. It is hypothesized that the simultaneous application of two different fields (electric field and acoustic wave) in perpendicular directions allows formation of transient pores along two axes of the cell membrane at reduced poration intensities, hence maximizing the delivery efficiency while minimizing cell death. The microfluidic electro-sonoporation system was characterized by delivering small molecules into mammalian cells, and showed average poration efficiency of 95.6% and cell viability of 97.3%. This proof of concept result shows that by combining electroporation and sonoporation together, significant improvement in molecule delivery efficiency could be achieved while maintaining high cell viability compared to electroporation or sonoporation alone. The microfluidic electro-sonoporation device presented here is, to the best of our knowledge, the first multi-modal cell poration device using simultaneous application of electric field and ultrasonic wave. This new multi-modal cell poration strategy and system is expected to have broad applications in delivery of small molecule therapeutics and ultimately in large molecule delivery such as gene transfection applications where high

  6. Development and calibration of a microfluidic biofilm growth cell with flow-templating and multi-modal characterization.

    PubMed

    Paquet-Mercier, Francois; Karas, Adnane; Safdar, Muhammad; Aznaveh, Nahid Babaei; Zarabadi, Mirpouyan; Greener, Jesse

    2014-01-01

    We report the development of a microfluidic flow-templating platform with multi-modal characterization for studies of biofilms and their precursor materials. A key feature is a special three inlet flow-template compartment, which confines and controls the location of biofilm growth against a template wall. Characterization compartments include Raman imaging to study the localization of the nutrient solutions, optical microscopy to quantify biofilm biomass and localization, and cyclic voltammetry for flow velocity measurements. Each compartment is tested and then utilized to make preliminary measurements.

  7. Comparison of sleep-wake classification using electroencephalogram and wrist-worn multi-modal sensor data.

    PubMed

    Sano, Akane; Picard, Rosalind W

    2014-01-01

    This paper presents a comparison of sleep-wake classification using electroencephalogram (EEG) data and multi-modal data from a wrist-worn wearable sensor. We collected physiological data from 15 college students while they were in bed: EEG, skin conductance (SC), skin temperature (ST), and acceleration (ACC). We computed features from these signals and compared intra-/inter-subject classification results. EEG features yielded 83% classification accuracy, while features from the wrist-worn sensor yielded 74%, and the combination of ACC and ST played the more important role in sleep/wake classification.
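
    A minimal sketch of this kind of per-epoch sleep/wake classification is shown below; it assumes feature vectors (e.g., SC, ST and ACC statistics) and binary sleep/wake labels are already available as NumPy arrays, and the feature names, placeholder data, and choice of a logistic-regression classifier are illustrative assumptions rather than the authors' exact pipeline.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def sleep_wake_accuracy(features, labels, folds=5):
          """Cross-validated accuracy for binary sleep (1) / wake (0) classification.

          features : (n_epochs, n_features) array, e.g. SC, ST and ACC statistics per epoch
          labels   : (n_epochs,) array of 0/1 sleep-wake annotations
          """
          clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
          scores = cross_val_score(clf, features, labels, cv=folds, scoring="accuracy")
          return scores.mean()

      # Example with random placeholder data (replace with real per-epoch features):
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 4))       # e.g. [SC mean, ST mean, ACC variance, ACC zero crossings]
      y = rng.integers(0, 2, size=500)    # sleep/wake labels
      print(f"mean CV accuracy: {sleep_wake_accuracy(X, y):.2f}")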

  8. An Automated Multi-Modal Serial Sectioning System for Characterization of Grain-Scale Microstructures in Engineering Materials

    NASA Astrophysics Data System (ADS)

    Uchic, Michael; Groeber, Michael; Shah, Megna; Callahan, Patrick; Shiveley, Adam; Scott, Michael; Chapman, Michael; Spowart, Jonathan

    This paper describes the development of a new serial sectioning system that has been designed to collect microstructural, crystallographic, and chemical information from volumes in excess of 1 mm³. The system integrates a robotic multi-platen mechanical polishing system with a modern SEM, enabling the acquisition of multi-modal data (scanning electron images, EBSD and hyperspectral EDS maps) at each section. Selected details of the system construction as well as an initial demonstration of the system capabilities are presented.

  9. Results from the commissioning of a multi-modal endoscope for ultrasound and time of flight PET

    SciTech Connect

    Bugalho, Ricardo

    2015-07-01

    The EndoTOFPET-US collaboration has developed a multi-modal imaging system combining Ultrasound with Time-of-Flight Positron Emission Tomography into an endoscopic imaging device. The objective of the project is to obtain a coincidence time resolution of about 200 ps FWHM and to achieve about 1 mm spatial resolution of the PET system, while integrating all the components in a very compact detector suitable for endoscopic use. This scanner aims to be exploited for diagnostic and surgical oncology, as well as being instrumental in the clinical test of new biomarkers especially targeted for prostate and pancreatic cancer. (authors)

  10. Search Analytics: Automated Learning, Analysis, and Search with Open Source

    NASA Astrophysics Data System (ADS)

    Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.

    2016-12-01

    The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." Without reading a single paper we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection users accuracy of 82
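
    As a small, hedged illustration of the first step described here (pulling text and metadata out of PDF and Office documents before any indexing or relationship extraction), the tika Python bindings for Apache Tika can be used roughly as follows; this assumes the tika package is installed and able to start or reach a Tika server, uses a hypothetical file name, and is not the project's actual code.

      from tika import parser  # pip install tika; requires Java for the Tika server

      def extract_document(path):
          """Return (plain_text, metadata) for a PDF or Office document via Apache Tika."""
          parsed = parser.from_file(path)            # dict with 'content' and 'metadata'
          text = (parsed.get("content") or "").strip()
          return text, parsed.get("metadata", {})

      text, meta = extract_document("example_abstract.pdf")   # hypothetical file name
      print(meta.get("Content-Type"), len(text), "characters extracted")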

  11. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

    …have been put in place for the projects: a community website dedicated to Open Chemistry projects; Git source code repositories (Kitware, mirrored…); the Gerrit code review system,[12] developed by Google as an open-source project for the Android operating…; and nightly software build testing on all three major platforms for merged code and testing of proposed changes using CDash@Home[13] (an open-source…

  12. Cross-platform learning: on the nature of children's learning from multiple media platforms.

    PubMed

    Fisch, Shalom M

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several recent studies to explore cross-platform learning (i.e., learning from combined use of multiple media platforms) and how such learning compares to learning from one medium. The paper discusses unique benefits of cross-platform learning, a theoretical mechanism to explain how these benefits might arise, and questions for future research in this emerging field.

  13. Cross-Platform JavaScript Coding: Shifting Sand Dunes and Shimmering Mirages.

    ERIC Educational Resources Information Center

    Merchant, David

    1999-01-01

    Most libraries don't have the resources to cross-platform and cross-version test all of their JavaScript coding. Many turn to WYSIWYG; however, WYSIWYG editors don't generally produce optimized coding. Web developers should: test their coding on at least one 3.0 browser, code by hand using tools to help speed that process up, and include a simple…

  14. Empirical comparison of cross-platform normalization methods for gene expression data.

    PubMed

    Rudy, Jason; Valafar, Faramarz

    2011-12-07

    Simultaneous measurement of gene expression on a genomic scale can be accomplished using microarray technology or by sequencing based methods. Researchers who perform high throughput gene expression assays often deposit their data in public databases, but heterogeneity of measurement platforms leads to challenges for the combination and comparison of data sets. Researchers wishing to perform cross platform normalization face two major obstacles. First, a choice must be made about which method or methods to employ. Nine are currently available, and no rigorous comparison exists. Second, software for the selected method must be obtained and incorporated into a data analysis workflow. Using two publicly available cross-platform testing data sets, cross-platform normalization methods are compared based on inter-platform concordance and on the consistency of gene lists obtained with transformed data. Scatter and ROC-like plots are produced and new statistics based on those plots are introduced to measure the effectiveness of each method. Bootstrapping is employed to obtain distributions for those statistics. The consistency of platform effects across studies is explored theoretically and with respect to the testing data sets. Our comparisons indicate that four methods, DWD, EB, GQ, and XPN, are generally effective, while the remaining methods do not adequately correct for platform effects. Of the four successful methods, XPN generally shows the highest inter-platform concordance when treatment groups are equally sized, while DWD is most robust to differently sized treatment groups and consistently shows the smallest loss in gene detection. We provide an R package, CONOR, capable of performing the nine cross-platform normalization methods considered. The package can be downloaded at http://alborz.sdsu.edu/conor and is available from CRAN.
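
    One of the comparison criteria used here, inter-platform concordance, can be illustrated with a short sketch that correlates per-gene expression values measured for the same samples on two platforms; this is a generic illustration assuming float arrays of matched genes and samples, not the CONOR package or any of the nine methods it implements.

      import numpy as np

      def interplatform_concordance(expr_a, expr_b):
          """Mean per-gene Pearson correlation between two platforms.

          expr_a, expr_b : (n_genes, n_samples) float arrays for the *same* genes and
                           samples, measured on platform A and platform B respectively.
          """
          a = expr_a - expr_a.mean(axis=1, keepdims=True)
          b = expr_b - expr_b.mean(axis=1, keepdims=True)
          num = (a * b).sum(axis=1)
          den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
          per_gene_r = num / den                      # Pearson r for each gene
          return float(np.nanmean(per_gene_r))        # simple concordance summary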

  15. Empirical comparison of cross-platform normalization methods for gene expression data

    PubMed Central

    2011-01-01

    Background Simultaneous measurement of gene expression on a genomic scale can be accomplished using microarray technology or by sequencing based methods. Researchers who perform high throughput gene expression assays often deposit their data in public databases, but heterogeneity of measurement platforms leads to challenges for the combination and comparison of data sets. Researchers wishing to perform cross platform normalization face two major obstacles. First, a choice must be made about which method or methods to employ. Nine are currently available, and no rigorous comparison exists. Second, software for the selected method must be obtained and incorporated into a data analysis workflow. Results Using two publicly available cross-platform testing data sets, cross-platform normalization methods are compared based on inter-platform concordance and on the consistency of gene lists obtained with transformed data. Scatter and ROC-like plots are produced and new statistics based on those plots are introduced to measure the effectiveness of each method. Bootstrapping is employed to obtain distributions for those statistics. The consistency of platform effects across studies is explored theoretically and with respect to the testing data sets. Conclusions Our comparisons indicate that four methods, DWD, EB, GQ, and XPN, are generally effective, while the remaining methods do not adequately correct for platform effects. Of the four successful methods, XPN generally shows the highest inter-platform concordance when treatment groups are equally sized, while DWD is most robust to differently sized treatment groups and consistently shows the smallest loss in gene detection. We provide an R package, CONOR, capable of performing the nine cross-platform normalization methods considered. The package can be downloaded at http://alborz.sdsu.edu/conor and is available from CRAN. PMID:22151536

  16. Cross-Platform JavaScript Coding: Shifting Sand Dunes and Shimmering Mirages.

    ERIC Educational Resources Information Center

    Merchant, David

    1999-01-01

    Most libraries don't have the resources to cross-platform and cross-version test all of their JavaScript coding. Many turn to WYSIWYG; however, WYSIWYG editors don't generally produce optimized coding. Web developers should: test their coding on at least one 3.0 browser, code by hand using tools to help speed that process up, and include a simple…

  17. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check-up and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
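
    The bias and noise parameters gathered by the application can be estimated from a short recording taken while the phone lies still; the sketch below is a minimal illustration assuming raw samples are already available as arrays, gravity acts along the z axis, and temperature effects are ignored.

      import numpy as np

      G = 9.80665  # standard gravity, m/s^2

      def accel_bias_and_noise(ax, ay, az):
          """Estimate per-axis accelerometer bias and noise from a stationary recording.

          ax, ay, az : 1-D arrays of samples (m/s^2) with the phone at rest, z axis up
                       so that az should read +g on average.
          Returns (bias, noise_std), each as a 3-element array for x, y, z.
          """
          samples = np.vstack([ax, ay, az])
          expected = np.array([0.0, 0.0, G])           # ideal stationary reading
          bias = samples.mean(axis=1) - expected       # offset from the ideal value
          noise_std = samples.std(axis=1, ddof=1)      # standard deviation of the noise
          return bias, noise_std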

  18. Evaluation of Game Engines for Cross-Platform Development of Mobile Serious Games for Health.

    PubMed

    Kleinschmidt, Carina; Haag, Martin

    2016-01-01

    Studies have shown that serious games for health can improve patient compliance and help to increase the quality of medical education. Due to a growing availability of mobile devices, the development of cross-platform mobile apps in particular is helpful for improving healthcare. As the development can be highly time-consuming and expensive, an alternative development process is needed. Game engines are expected to simplify this process. Therefore, this article examines the question of whether using game engines for cross-platform serious games for health can simplify development compared to the development of a plain HTML5 app. First, a systematic review of the literature was conducted in different databases (MEDLINE, ACM and IEEE). Afterwards, three different game engines were chosen, evaluated in different categories and compared to the development of an HTML5 app. This was realized by implementing a prototypical application in the different engines and conducting a utility analysis. The evaluation shows that the Marmalade engine is the best choice for development in this scenario. Furthermore, it is evident that the game engines have clear benefits over plain HTML5 development, as they provide components for graphics, physics, sounds, etc. The authors recommend using the Marmalade Engine for a cross-platform mobile serious game for health.
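
    The utility analysis mentioned here amounts to weighted scoring of the candidate engines across the evaluation categories; the sketch below uses made-up category names, weights, scores, and engine labels purely for illustration and does not reproduce the authors' data.

      # Weighted-sum utility analysis over evaluation categories (illustrative values only).
      weights = {"graphics": 0.3, "cross_platform": 0.3, "documentation": 0.2, "cost": 0.2}

      scores = {  # 1 (poor) .. 5 (excellent), hypothetical ratings
          "Engine A":    {"graphics": 4, "cross_platform": 5, "documentation": 3, "cost": 4},
          "Engine B":    {"graphics": 5, "cross_platform": 3, "documentation": 4, "cost": 2},
          "Plain HTML5": {"graphics": 2, "cross_platform": 5, "documentation": 4, "cost": 5},
      }

      def utility(rating):
          return sum(weights[c] * rating[c] for c in weights)

      for name, rating in sorted(scores.items(), key=lambda kv: -utility(kv[1])):
          print(f"{name:12s} utility = {utility(rating):.2f}")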

  19. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications.

    PubMed

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-04-04

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check-up and compare their smartphone sensors against a large number of similar or identical models.

  20. XMS: Cross-Platform Normalization Method for Multimodal Mass Spectrometric Tissue Profiling

    NASA Astrophysics Data System (ADS)

    Golf, Ottmar; Muirhead, Laura J.; Speller, Abigail; Balog, Júlia; Abbassi-Ghadi, Nima; Kumar, Sacheen; Mróz, Anna; Veselkov, Kirill; Takáts, Zoltán

    2015-01-01

    Here we present a proof-of-concept cross-platform normalization approach to convert raw mass spectra acquired by distinct desorption ionization methods and/or instrumental setups to cross-platform normalized analyte profiles. The initial step of the workflow is database-driven peak annotation, followed by summarization of peak intensities of different ions from the same molecule. The resulting compound-intensity spectra are adjusted to a method-independent intensity scale by using predetermined, compound-specific normalization factors. The method is based on the assumption that distinct MS-based platforms capture a similar set of chemical species in a biological sample, though these species may exhibit platform-specific molecular ion intensity distribution patterns. The method was validated on two sample sets of (1) porcine tissue analyzed by laser desorption ionization (LDI), desorption electrospray ionization (DESI), and rapid evaporative ionization mass spectrometry (REIMS) in combination with Fourier transform-based mass spectrometry; and (2) healthy/cancerous colorectal tissue analyzed by DESI and REIMS, with the latter being combined with time-of-flight mass spectrometry. We demonstrate the capacity of our method to reduce MS-platform specific variation, resulting in (1) high inter-platform concordance coefficients of analyte intensities; (2) clear principal component based clustering of analyte profiles according to histological tissue types, irrespective of the desorption ionization technique or mass spectrometer used; and (3) accurate "blind" classification of histologic tissue types using cross-platform normalized analyte profiles.
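
    The workflow described (annotate peaks to compounds, sum the intensities of different ions from the same molecule, then rescale with predetermined compound-specific factors) can be sketched as follows; the data structures, compound names, and factor values are illustrative assumptions, not the XMS implementation.

      from collections import defaultdict

      def compound_profile(peaks, annotation, norm_factors):
          """Convert a raw peak list into a cross-platform normalized compound profile.

          peaks        : list of (mz, intensity) tuples from one spectrum
          annotation   : dict mapping a rounded m/z value to a compound name,
                         standing in for database-driven peak annotation
          norm_factors : dict of predetermined, compound-specific normalization factors
          """
          summed = defaultdict(float)
          for mz, intensity in peaks:
              compound = annotation.get(round(mz, 2))
              if compound is not None:                     # ignore unannotated peaks
                  summed[compound] += intensity            # sum ions of the same molecule
          return {c: summed[c] / norm_factors[c]           # method-independent intensity scale
                  for c in summed if c in norm_factors}

      # Hypothetical example:
      peaks = [(255.23, 1.2e5), (279.23, 8.0e4), (281.25, 9.5e4)]
      annotation = {255.23: "compound_A", 279.23: "compound_B", 281.25: "compound_C"}
      norm_factors = {"compound_A": 2.0, "compound_B": 1.5, "compound_C": 1.8}
      print(compound_profile(peaks, annotation, norm_factors))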