Science.gov

Sample records for open-source cross-platform multi-modal

  1. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
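The abstract's central idea, displaying diverse data "within a common, defined coordinate frame", can be sketched independently of DV3D: each volume carries a 4x4 voxel-to-world affine (as in NIfTI-1), and overlaying modalities means mapping each grid's voxel indices through its own affine into shared world coordinates. This is a minimal illustration, not DV3D code, and the affine values are hypothetical.

```python
def apply_affine(affine, ijk):
    """Map a voxel index (i, j, k) to world (x, y, z) via a 4x4 affine."""
    i, j, k = ijk
    vec = (i, j, k, 1.0)
    return tuple(sum(affine[r][c] * vec[c] for c in range(4)) for r in range(3))

# Hypothetical affines: a 1 mm isotropic structural MRI and a 2 mm
# functional overlay, both sharing the same world origin.
mri_affine = [[1, 0, 0, -90], [0, 1, 0, -126], [0, 0, 1, -72], [0, 0, 0, 1]]
overlay_affine = [[2, 0, 0, -90], [0, 2, 0, -126], [0, 0, 2, -72], [0, 0, 0, 1]]

# The same anatomical point has different voxel indices in each grid...
mri_xyz = apply_affine(mri_affine, (100, 130, 80))
overlay_xyz = apply_affine(overlay_affine, (50, 65, 40))
# ...but identical world coordinates, so the overlay lands in the right place.
assert mri_xyz == overlay_xyz
```

Any real viewer additionally resamples one grid onto the other, but the affine mapping is the part that makes a "single coordinate space" possible.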

  2. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool.

    PubMed

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2009-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, Red Hat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data.

  3. OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis

    NASA Astrophysics Data System (ADS)

    Grohmann, C. H.; Campanha, G. A.

    2010-12-01

    Free and open source software (FOSS) is increasingly seen as a synonym of innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, with commercial licenses or as free downloads from the Internet. Some provide basic tools of stereographic projections such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc., while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of data formatting for input, Graphical User Interface (GUI) design and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based Mac OS, there are no native packages for *nix (UNIX, Linux, BSD etc.) Operating Systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software package for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the NumPy module, and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic export to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of
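The basic stereonet operations the abstract lists, plotting poles of planes on an equal-area net, reduce to a little spherical trigonometry. The sketch below is plain Python rather than OpenStereo's NumPy/Matplotlib code: it converts a dip direction/dip pair to its pole's trend/plunge and projects that line onto a lower-hemisphere Schmidt (equal-area) net.

```python
import math

def pole_to_plane(dip_direction, dip):
    """Trend/plunge (degrees) of the pole (normal) to a plane given its
    dip direction and dip in degrees."""
    return (dip_direction + 180.0) % 360.0, 90.0 - dip

def schmidt_xy(trend, plunge, radius=1.0):
    """Lower-hemisphere equal-area (Schmidt) projection of a line.
    Returns plot coordinates with x east and y north; a vertical line
    (plunge 90) maps to the center, a horizontal line to the edge."""
    t, p = math.radians(trend), math.radians(plunge)
    r = radius * math.sqrt(2.0) * math.sin((math.pi / 2.0 - p) / 2.0)
    return r * math.sin(t), r * math.cos(t)

# A plane dipping 30 degrees toward the east (dip direction 090):
trend, plunge = pole_to_plane(90.0, 30.0)   # pole trends 270, plunges 60
x, y = schmidt_xy(trend, plunge)            # plots west of center
```

Density contouring and eigenvector analysis operate on the same unit-vector representation of these lines; the projection above is only the final plotting step.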

  4. A new, open-source, multi-modality digital breast phantom

    NASA Astrophysics Data System (ADS)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
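The Voronoi step mentioned above, carving the breast volume into initial glandular compartments, amounts to labelling each voxel with the index of its nearest random seed point. A minimal sketch of that assignment, with arbitrary grid and seed counts chosen for illustration only:

```python
import random

# Illustrative sizes only; the real phantom uses a much finer voxel grid.
random.seed(0)
SIZE, N_SEEDS = 16, 5
seeds = [(random.uniform(0, SIZE), random.uniform(0, SIZE), random.uniform(0, SIZE))
         for _ in range(N_SEEDS)]

def nearest_seed(x, y, z):
    """Index of the closest seed: the compartment this voxel belongs to."""
    return min(range(N_SEEDS),
               key=lambda i: (x - seeds[i][0]) ** 2
                             + (y - seeds[i][1]) ** 2
                             + (z - seeds[i][2]) ** 2)

# Label every voxel center; adjacent voxels with different labels lie on
# a compartment boundary, where ductal trees and ligaments are later grown.
labels = [[[nearest_seed(x + 0.5, y + 0.5, z + 0.5) for z in range(SIZE)]
           for y in range(SIZE)] for x in range(SIZE)]
```

The phantom's later steps (Perlin-noise lobules, ductal growth) refine these flat Voronoi interfaces into realistic tissue boundaries.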

  5. An open-source and cross-platform framework for Brain Computer Interface-guided robotic arm control.

    PubMed

    Kubben, Pieter L; Pouratian, Nader

    2012-01-01

    Research on Brain Computer Interfaces (BCIs) has focused on several areas, of which motor substitution has received particular interest. Whereas open-source BCI software is available to facilitate cost-effective collaboration between research groups, it mainly focuses on communication and computer control. We developed an open-source and cross-platform framework, which works with cost-effective equipment and allows researchers to enter the field of BCI-based motor substitution without major upfront investments. It is based on the C++ programming language and the Qt framework, and offers a separate class for custom MATLAB/Simulink scripts. It has been tested using a 14-channel wireless electroencephalography (EEG) device and a low-cost robotic arm that offers five degrees of freedom. The software contains four modules to control the robotic arm, one of which receives input from the EEG device. Strengths, current limitations, and future developments are discussed.

  6. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    PubMed

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
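PyGaze's online event detection uses its own custom algorithm; purely as a generic illustration of the underlying idea, a velocity-threshold (I-VT style) detector flags a saccade onset whenever inter-sample gaze velocity exceeds a fixed threshold. The sample data, sampling interval, and threshold below are hypothetical:

```python
def detect_saccades(samples, dt, threshold):
    """Velocity-threshold (I-VT style) saccade detection.

    samples: list of (x, y) gaze positions; dt: seconds between samples;
    threshold: velocity threshold in position units per second.
    Returns the sample indices at which saccades begin."""
    onsets = []
    in_saccade = False
    for i in range(1, len(samples)):
        (x0, y0), (x1, y1) = samples[i - 1], samples[i]
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if velocity > threshold and not in_saccade:
            onsets.append(i)       # saccade onset at this sample
            in_saccade = True
        elif velocity <= threshold:
            in_saccade = False     # back in fixation
    return onsets

# A fixation, one rapid 100-px jump, then fixation again, at 500 Hz.
gaze = [(0, 0)] * 5 + [(100, 0)] * 5
onsets = detect_saccades(gaze, dt=0.002, threshold=1000.0)
```

Real detectors additionally smooth the velocity trace and enforce minimum event durations; the threshold comparison is the core of the online case.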

  7. OpenChrom: a cross-platform open source software for the mass spectrometric analysis of chromatographic data

    PubMed Central

    2010-01-01

    Background Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which needs to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between the analytical results. To address this situation a number of either commercial or non-profit software applications have been developed. These applications provide functionalities to import and analyze several data formats but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. Results This work describes a native approach to handle chromatographic data files. The approach can be extended in its functionality such as facilities to detect baselines, to detect, integrate and identify peaks and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied on the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations like undo and redo are supported. Conclusions OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms. It offers a customizable graphical user interface. The software is independent of the operating system, as it is built on the Rich Client Platform, which is written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions. They can be published using open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net. PMID:20673335
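OpenChrom itself is written in Java; purely as an illustration of the noise-filtering step described above, here is a minimal moving-average smoother of the kind applied to a chromatogram's intensity trace (sketched in Python for brevity, with invented data):

```python
def moving_average(signal, window):
    """Smooth a 1-D intensity trace with a centered moving average.

    A simple example of the noise filters applied to chromatographic
    data; the window shrinks at the edges so every point gets a value."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

# A one-sample detector spike is strongly attenuated by a 3-point window.
noisy = [0, 0, 10, 0, 0]
smoothed = moving_average(noisy, window=3)
```

Production filters (e.g. Savitzky-Golay) preserve peak shape better than a plain mean, but the sliding-window structure is the same.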

  8. GeolOkit 1.0: a new Open Source, Cross-Platform software for geological data visualization in Google Earth environment

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Antoine; Bastin, Christophe; Watlet, Arnaud

    2016-04-01

    GIS software suites are today's essential tools to gather and visualise geological data, to apply spatial and temporal analysis and, in fine, to create and share interactive maps for further geosciences investigations. For these purposes, we developed GeolOkit: an open-source, freeware and lightweight software package, written in Python, a high-level, cross-platform programming language. GeolOkit is accessible through a graphical user interface, designed to run in parallel with Google Earth. It is a user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to plot these into the Google Earth environment using KML code. This workflow requires no third-party software other than Google Earth itself. GeolOkit comes with a large number of geosciences labels, symbols, colours and placemarks and can process: (i) multi-point data, (ii) contours via several interpolation methods, (iii) discrete planar and linear structural data in 2D or 3D, supporting a wide range of structural input formats, (iv) clustered stereonets and rose diagrams, (v) drawn cross-sections as vertical sections, (vi) georeferenced maps and vectors, (vii) field pictures using either geotagging metadata from a camera's built-in GPS module, or the same-day track of an external GPS. We invite you to discover all the functionalities of the GeolOkit software. As this project is under development, we welcome discussion of your needs, your ideas and your contributions to the GeolOkit project.
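The KML-generation step can be sketched with nothing but the Python standard library, consistent with the abstract's claim that no third-party software beyond Google Earth is needed. This is not GeolOkit's actual code, and the placemark name and coordinates are hypothetical:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def placemark_kml(name, lon, lat):
    """Build a minimal KML document containing one point placemark,
    the kind of file a tool hands to Google Earth for display."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML orders coordinates longitude,latitude,altitude.
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical sample location; saving this text as .kml opens in Google Earth.
kml_text = placemark_kml("Sample GR-01", 4.35, 50.85)
```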

  9. Multi-Modality Phantom Development

    SciTech Connect

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  10. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  11. Open Source Vision

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    Increasingly, colleges and universities are turning to open source as a way to meet their technology infrastructure and application needs. Open source has changed life for visionary CIOs and their campus communities nationwide. The author discusses what these technologists see as the benefits--and the considerations.

  12. Creating Open Source Conversation

    ERIC Educational Resources Information Center

    Sheehan, Kate

    2009-01-01

    Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…

  13. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  14. Multi Modal Anticipation in Fuzzy Space

    NASA Astrophysics Data System (ADS)

    Asproth, Viveca; Holmberg, Stig C.; Håkansson, Anita

    2006-06-01

    We are all stakeholders in the geographical space, which makes up our common living and activity space. This means that careful, creative, and anticipatory planning, design, and management of that space will be of paramount importance for our sustained life on earth. Here it is shown that the quality of such planning could be significantly increased with the help of a computer-based modelling and simulation tool. Further, the design and implementation of such a tool ought to be guided by the conceptual integration of some core concepts like anticipation and retardation, multi-modal system modelling, fuzzy space modelling, and multi-actor interaction.

  15. Quantitative multi-modal NDT data analysis

    SciTech Connect

    Heideklang, René; Shokouhi, Parisa

    2014-02-18

    A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular, whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
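A high-level (decision-level) fusion scheme of the kind the study adopts can be sketched as min-max normalization of each modality's per-position defect scores followed by averaging, so that a defect seen by all sensors outscores a false alarm seen by one. The three score vectors below are invented illustration data, not the paper's Eddy Current/GMR/Thermography measurements:

```python
def minmax(scores):
    """Rescale a score vector to [0, 1] so modalities are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(*modalities):
    """Decision-level fusion: average the normalized per-position scores."""
    normalized = [minmax(m) for m in modalities]
    return [sum(vals) / len(vals) for vals in zip(*normalized)]

# Hypothetical per-position defect scores from three sensors; all three
# agree on a defect at position 2, while one sensor has a false alarm
# at position 3.
eddy   = [0.1, 0.2, 0.9, 0.1]
gmr    = [0.0, 0.1, 0.8, 0.7]
thermo = [0.2, 0.1, 1.0, 0.2]
fused = fuse(eddy, gmr, thermo)   # position 2 dominates after fusion
```

This is the simplest member of the fusion family; weighting by sensor reliability or fusing at feature level are the usual refinements.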

  16. Open-Source Colorimeter

    PubMed Central

    Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.

    2013-01-01

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032

  17. Open Source in Education

    ERIC Educational Resources Information Center

    Lakhan, Shaheen E.; Jhunjhunwala, Kavita

    2008-01-01

    Educational institutions have rushed to put their academic resources and services online, beginning the global community onto a common platform and awakening the interest of investors. Despite continuing technical challenges, online education shows great promise. Open source software offers one approach to addressing the technical problems in…

  18. Evaluating Open Source Portals

    ERIC Educational Resources Information Center

    Goh, Dion; Luyt, Brendan; Chua, Alton; Yee, See-Yong; Poh, Kia-Ngoh; Ng, How-Yeu

    2008-01-01

    Portals have become indispensable for organizations of all types trying to establish themselves on the Web. Unfortunately, there have only been a few evaluative studies of portal software and even fewer of open source portal software. This study aims to add to the available literature in this important area by proposing and testing a checklist for…

  19. Open Source Software Development

    DTIC Science & Technology

    2011-01-01

    Agency’s XMM-Newton Observatory, the Sloan Digital Sky Survey, and others. These are three highly visible astrophysics research projects whose...In scientific fields like astrophysics that critically depend on software, open source is considered an essential precondition for research to...space are made, this in turn often leads to modification, extension, and new versions of the astronomical software in use that enable astrophysical

  20. Open-Source GIS

    SciTech Connect

    Vatsavai, Raju; Burk, Thomas E; Lime, Steve

    2012-01-01

    The components making up an Open Source GIS are explained in this chapter. A map server (Sect. 30.1) can broadly be defined as a software platform for dynamically generating spatially referenced digital map products. The University of Minnesota MapServer (UMN Map Server) is one such system. Its basic features are visualization, overlay, and query. Section 30.2 names and explains many of the geospatial open source libraries, such as GDAL and OGR. The other libraries are FDO, JTS, GEOS, JCS, MetaCRS, and GPSBabel. The application examples include derived GIS-software and data format conversions. Quantum GIS, its origin, and its applications are explained in detail in Sect. 30.3. The features include a rich GUI, attribute tables, vector symbols, labeling, editing functions, projections, georeferencing, GPS support, analysis, and Web Map Server functionality. Future developments will address mobile applications, 3-D, and multithreading. The origins of PostgreSQL are outlined and PostGIS is discussed in detail in Sect. 30.4. It extends PostgreSQL by implementing the Simple Feature standard. Section 30.5 details the most important open source licenses such as the GPL, the LGPL, the MIT License, and the BSD License, as well as the role of the Creative Commons.

  1. Multi-modality molecular imaging for gastric cancer research

    NASA Astrophysics Data System (ADS)

    Liang, Jimin; Chen, Xueli; Liu, Junting; Hu, Hao; Qu, Xiaochao; Wang, Fu; Nie, Yongzhan

    2011-12-01

    Because of the ability to integrate the strengths of different modalities and provide fully integrated information, multi-modality molecular imaging techniques provide an excellent solution for detecting and diagnosing cancer earlier, which remains difficult to achieve using existing techniques. In this paper, we present an overview of our research efforts on the development of the optical imaging-centric multi-modality molecular imaging platform, including the development of the imaging system, reconstruction algorithms and preclinical biomedical applications. Primary biomedical results show that the developed optical imaging-centric multi-modality molecular imaging platform may provide great potential in preclinical biomedical applications and future clinical translation.

  2. PR-PR: Cross-Platform Laboratory Automation System

    SciTech Connect

    Linshiz, G; Stawski, N; Goyal, G; Bi, CH; Poust, S; Sharma, M; Mutalik, V; Keasling, JD; Hillson, NJ

    2014-08-01

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.
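The cross-platform pattern described here, one abstract protocol translated by per-platform backends (including into a human language such as English), can be sketched as follows. The command tuples and output formats are hypothetical illustrations, not the actual PR-PR language or its translation modules:

```python
# One abstract protocol, several backend "translators": the same command
# list is rendered either as robot-style instructions or as English steps.
# All names and formats below are invented for illustration.

PROTOCOL = [("transfer", "A1", "B1", 10), ("mix", "B1", 3)]

def to_robot(protocol):
    """Render the protocol as hypothetical liquid-handler instructions."""
    lines = []
    for cmd in protocol:
        if cmd[0] == "transfer":
            lines.append(f"ASPIRATE {cmd[3]} uL FROM {cmd[1]}; DISPENSE TO {cmd[2]}")
        elif cmd[0] == "mix":
            lines.append(f"MIX {cmd[1]} x{cmd[2]}")
    return lines

def to_english(protocol):
    """Render the same protocol as steps a human can follow manually."""
    lines = []
    for cmd in protocol:
        if cmd[0] == "transfer":
            lines.append(f"Transfer {cmd[3]} uL from well {cmd[1]} to well {cmd[2]}.")
        elif cmd[0] == "mix":
            lines.append(f"Mix well {cmd[1]} {cmd[2]} times.")
    return lines
```

Keeping the protocol platform-neutral and pushing all platform-specific optimization into the backends is what makes the same experiment reproducible across robotic, microfluidic, and manual execution.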

  3. Cross platform development using Delphi and Kylix

    SciTech Connect

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed for use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  4. Multi-modality neuro-monitoring: conventional clinical trial design.

    PubMed

    Georgiadis, Alexandros L; Palesch, Yuko Y; Zygun, David; Hemphill, J Claude; Robertson, Claudia S; Leroux, Peter D; Suarez, Jose I

    2015-06-01

    Multi-modal monitoring has become an integral part of neurointensive care. However, our approach is at this time neither standardized nor backed by data from randomized controlled trials. The goal of the second Neurocritical Care Research Conference was to discuss research priorities in multi-modal monitoring, what research tools are available, as well as the latest advances in clinical trial design. This section of the meeting was focused on how such a trial should be designed so as to maximize yield and avoid mistakes of the past.

  5. Utilizing Multi-Modal Literacies in Middle Grades Science

    ERIC Educational Resources Information Center

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  6. An Open Source Simulation System

    NASA Technical Reports Server (NTRS)

    Slack, Thomas

    2005-01-01

    An investigation into the current state of the art of open source real-time programming practices. This document covers what technologies are available, how easy they are to obtain, configure, and use, and some performance measures taken on the different systems. A matrix of vendors and their products is included as part of this investigation, but this is not an exhaustive list and represents only a snapshot in time in a field that is changing rapidly. Specifically, three approaches are investigated: 1. Completely open source on generic hardware, downloaded from the net. 2. Open source packaged by a vendor and provided as a free evaluation copy. 3. Proprietary hardware with pre-loaded, source-available software provided by the vendor for our evaluation.

  7. A bioinspired multi-modal flying and walking robot.

    PubMed

    Daler, Ludovic; Mintchev, Stefano; Stefanini, Cesare; Floreano, Dario

    2015-01-19

    With the aim of extending the versatility and adaptability of robots in complex environments, a novel multi-modal flying and walking robot is presented. The robot consists of a flying wing with adaptive morphology that can perform both long-distance flight and walking in cluttered environments for local exploration. The robot's design is inspired by the common vampire bat Desmodus rotundus, which can perform aerial and terrestrial locomotion with limited trade-offs. The wings' adaptive morphology allows the robot to modify the shape of its body in order to increase its efficiency during terrestrial locomotion. Furthermore, aerial and terrestrial capabilities are powered by a single locomotor apparatus, which reduces the total complexity and weight of this multi-modal robot.

  8. Combining Multi-modal Features for Social Media Analysis

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Spiros; Giannakidou, Eirini; Kompatsiaris, Ioannis; Patras, Ioannis; Vakali, Athena

    In this chapter we discuss methods for efficiently modeling the diverse information carried by social media. The problem is viewed as a multi-modal analysis process where specialized techniques are used to overcome the obstacles arising from the heterogeneity of data. Focusing at the optimal combination of low-level features (i.e., early fusion), we present a bio-inspired algorithm for feature selection that weights the features based on their appropriateness to represent a resource. Under the same objective of optimal feature combination we also examine the use of pLSA-based aspect models, as the means to define a latent semantic space where heterogeneous types of information can be effectively combined. Tagged images taken from social sites have been used in the characteristic scenarios of image clustering and retrieval, to demonstrate the benefits of multi-modal analysis in social media.

  9. A System Approach to Adaptive Multi-Modal Sensor Designs

    DTIC Science & Technology

    2010-02-01

    [Report documentation page residue; recoverable details: AFOSR grant FA9550-08-1-0199, "A System Approach to Adaptive Multi-Modal Sensor Designs"; City College of New York, Department of Computer Science, Convent Ave & 138th St, New York, NY 10031; program managers Dr. Douglas Cochran and Dr. Kitt C. Reinhardt; approved for public release.]

  10. MINERVA - A Multi-Modal Radiation Treatment Planning System

    SciTech Connect

    D. E. Wessol; C. A. Wemple; D. W. Nigg; J. J. Cogliati; M. L. Milvich; C. Frederickson; M. Perkins; G. A. Harkin

    2004-10-01

    Recently, research efforts have begun to examine the combination of BNCT with external beam photon radiotherapy (Barth et al. 2004). In order to properly prepare treatment plans for patients being treated with combinations of radiation modalities, appropriate planning tools must be available. To facilitate this, researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and Montana State University (MSU) have undertaken development of a fully multi-modal radiation treatment planning system.

  11. THE OPEN SOURCING OF EPANET

    EPA Science Inventory

    A proposal was made at the 2009 EWRI Congress in Kansas City, MO to establish an Open Source Project (OSP) for the widely used EPANET pipe network analysis program. This would be an ongoing collaborative effort among a group of geographically dispersed advisors and developers, wo...

  12. OpenSesame: an open-source, graphical experiment builder for the social sciences.

    PubMed

    Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-06-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

  13. A multi-modal parcellation of human cerebral cortex.

    PubMed

    Glasser, Matthew F; Coalson, Timothy S; Robinson, Emma C; Hacker, Carl D; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-08-11

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.

  14. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
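The fusion strategy described above (augmenting image features with lab and EKG biomarkers before classification) can be illustrated with a deliberately simple sketch. The nearest-centroid classifier, the synthetic feature values, and all names below are assumptions for illustration, not the authors' CAD pipeline:

```python
import numpy as np

def zscore(a):
    """Standardize each feature column to zero mean, unit variance."""
    return (a - a.mean(axis=0)) / (a.std(axis=0) + 1e-12)

def fit_centroids(X, y):
    """Per-class mean feature vectors."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(X, centroids):
    """Assign each row to the class of its nearest centroid."""
    labels = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

def fuse(image_feats, biomarkers):
    """Early fusion: standardize each modality, then concatenate, so a
    high-magnitude lab value (e.g. ESR) does not swamp the image features."""
    return np.hstack([zscore(image_feats), zscore(biomarkers)])
```

On synthetic data where the image features separate the classes only weakly and one biomarker separates them strongly, the fused representation scores higher than the image-only one, mirroring the paper's reported improvement in direction (not magnitude).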

  15. A multi-modal parcellation of human cerebral cortex

    PubMed Central

    Glasser, Matthew F; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-01-01

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease. PMID:27437579

  16. Multi-modal cockpit interface for improved airport surface operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.

  17. Plasmonic Gold Nanostars for Multi-Modality Sensing and Diagnostics

    PubMed Central

    Liu, Yang; Yuan, Hsiangkuo; Kersey, Farrell R.; Register, Janna K.; Parrott, Matthew C.; Vo-Dinh, Tuan

    2015-01-01

    Gold nanostars (AuNSs) are unique systems that can provide a novel multifunctional nanoplatform for molecular sensing and diagnostics. The plasmonic absorption band of AuNSs can be tuned to the near infrared spectral range, often referred to as the “tissue optical window”, where light exhibits minimal absorption and deep penetration in tissue. AuNSs have been applied for detecting disease biomarkers and for biomedical imaging using multi-modality methods including surface-enhanced Raman scattering (SERS), two-photon photoluminescence (TPL), magnetic resonance imaging (MRI), positron emission tomography (PET), and X-ray computed tomography (CT) imaging. In this paper, we provide an overview of the recent development of plasmonic AuNSs in our laboratory for biomedical applications and highlight their potential for future translational medicine as a multifunctional nanoplatform. PMID:25664431

  18. The origin of human multi-modal communication.

    PubMed

    Levinson, Stephen C; Holler, Judith

    2014-09-19

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system.

  19. Open Source: Everyone Becomes a Printer.

    ERIC Educational Resources Information Center

    Bruce, Bertram

    2000-01-01

    Discusses "open source": a method of distributing software in which programmers make available to all the actual text of their programs. Notes that this makes possible "open-source" writing in the same way that the printing press made possible "open-source" reading, enabling mass literacy. Examines implications of…

  20. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  1. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.

    2013-12-01

    The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: the S-HYPE model for Sweden, the BALT-HYPE model for the Baltic Sea, and the E-HYPE model for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large scale modeling in different regions of the world. An important goal of our work is to make our data and tools available as open data and services. To this end we created the HYPE Open Source Community (OSC), which makes the source code of HYPE available to anyone interested in further development of HYPE. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from.

  2. The HYPE Open Source Community

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Arheimer, Berit; Pers, Charlotta; Isberg, Kristina

    2013-04-01

    The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model (Lindström et al., 2010). It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. In Sweden, the model is used by water authorities to fulfil the Water Framework Directive and the Marine Strategy Framework Directive. It is used for characterization, forecasts, and scenario analyses. Model data can be downloaded for free from three different HYPE applications: Europe (www.smhi.se/e-hype), Baltic Sea basin (www.smhi.se/balt-hype), and Sweden (vattenweb.smhi.se). The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modelling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code will be delivered frequently. The main objective of the HYPE OSC is to provide public access to a state-of-the-art operational hydrological model and to encourage hydrologic expertise from different parts of the world to contribute to model improvement. HYPE OSC is open to everyone interested in hydrology, hydrological modelling and code development - e.g. scientists, authorities, and consultancies. The HYPE Open Source Community was initiated in November 2011 by a kick-off and workshop with 50 eager participants.

  3. The origin of human multi-modal communication

    PubMed Central

    Levinson, Stephen C.; Holler, Judith

    2014-01-01

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system. PMID:25092670

  4. Free for All: Open Source Software

    ERIC Educational Resources Information Center

    Schneider, Karen

    2008-01-01

    Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…

  5. Open-source software: not quite endsville.

    PubMed

    Stahl, Matthew T

    2005-02-01

    Open-source software will never achieve ubiquity. There are environments in which it simply does not flourish. By its nature, open-source development requires free exchange of ideas, community involvement, and the efforts of talented and dedicated individuals. However, pressures can come from several sources that prevent this from happening. In addition, openness and complex licensing issues invite misuse and abuse. Care must be taken to avoid the pitfalls of open-source software.

  6. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  7. Planes, Trains, and Automobiles: Savings Potential of Utilizing Multi-Modal Transport for Depositioning Cargo in the CONUS

    DTIC Science & Technology

    2012-06-01

    AFIT/IMO/ENS/12-07. Planes, Trains, and Automobiles: Savings Potential of Utilizing Multi-Modal Transport for Depositioning Cargo in the CONUS. Timothy M

  8. 7 Questions to Ask Open Source Vendors

    ERIC Educational Resources Information Center

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  9. Open Source, Openness, and Higher Education

    ERIC Educational Resources Information Center

    Wiley, David

    2006-01-01

    In this article David Wiley provides an overview of how the general expansion of open source software has affected the world of education in particular. In doing so, Wiley not only addresses the development of open source software applications for teachers and administrators, he also discusses how the fundamental philosophy of the open source…

  10. Deformable registration of multi-modal data including rigid structures

    SciTech Connect

    Huesman, Ronald H.; Klein, Gregory J.; Kimdon, Joey A.; Kuo, Chaincy; Majumdar, Sharmila

    2003-05-02

    Multi-modality imaging studies are becoming more widely utilized in the analysis of medical data. Anatomical data from CT and MRI are useful for analyzing or further processing functional data from techniques such as PET and SPECT. When data are not acquired simultaneously, even when these data are acquired on a dual-imaging device using the same bed, motion can occur that requires registration between the reconstructed image volumes. As the human torso can allow non-rigid motion, this type of motion should be estimated and corrected. We report a deformable registration technique that utilizes rigid registration for bony structures, while allowing elastic transformation of soft tissue to more accurately register the entire image volume. The technique is applied to the registration of CT and MR images of the lumbar spine. First a global rigid registration is performed to approximately align features. Bony structures are then segmented from the CT data using a semi-automated process, and bounding boxes for each vertebra are established. Each CT subvolume is then individually registered to the MRI data using a piece-wise rigid registration algorithm and a mutual information image similarity measure. The resulting set of rigid transformations allows for accurate registration of the parts of the CT and MRI data representing the vertebrae, but not the adjacent soft tissue. To align the soft tissue, a smoothly varying deformation is computed using a thin-plate spline (TPS) algorithm. The TPS technique requires a sparse set of landmarks that are to be brought into correspondence. These landmarks are automatically obtained from the segmented data using simple edge-detection techniques and random sampling from the edge candidates. A smoothness parameter is also included in the TPS formulation for characterization of the stiffness of the soft tissue. An appropriate stiffness factor is estimated iteratively, using the mutual information cost function on the result.
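The thin-plate spline step described above can be sketched in a few lines. This is a generic 2-D TPS fit, not the authors' implementation; the stiffness (smoothness) parameter enters as diagonal regularization of the kernel matrix, relaxing exact landmark interpolation:

```python
import numpy as np

def tps_fit(src, dst, stiffness=0.0):
    """Fit a 2-D thin-plate spline mapping src landmarks onto dst.

    stiffness > 0 trades exact interpolation for smoothness, analogous
    to the soft-tissue stiffness factor described in the abstract.
    """
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = 0.5 * d2 * np.log(d2 + np.finfo(float).eps)   # U(r) = r^2 log r
    K += stiffness * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])             # affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs)                    # (n+3, 2) coefficients

def tps_map(pts, src, params):
    """Apply the fitted spline to arbitrary points."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = 0.5 * d2 * np.log(d2 + np.finfo(float).eps)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]
```

With stiffness 0 the fitted spline passes exactly through the landmark pairs; increasing it stiffens the deformation, which is the knob the iterative mutual-information search would tune.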

  11. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com.

  12. Open3DALIGN: an open-source software aimed at unsupervised ligand alignment.

    PubMed

    Tosco, Paolo; Balle, Thomas; Shiri, Fereshteh

    2011-08-01

    An open-source, cross-platform software aimed at conformer generation and unsupervised rigid-body molecular alignment is presented. Different algorithms have been implemented to perform single and multi-conformation superimpositions on one or more templates. Alignments can be accomplished by matching pharmacophores, heavy atoms or a combination of the two. All methods have been successfully validated on eight comprehensive datasets previously gathered by Sutherland and co-workers. High computational performance has been attained through efficient parallelization of the code. The unsupervised nature of the alignment algorithms, together with its scriptable interface, make Open3DALIGN an ideal component of high-throughput, automated cheminformatics workflows.
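The rigid-body superimposition at the core of tools like Open3DALIGN is typically some variant of the Kabsch algorithm, which finds the optimal rotation between two sets of matched atoms or pharmacophore points. The following is a generic sketch of that algorithm, not Open3DALIGN's actual code:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation aligning point set P onto Q (rows are matched
    atoms/features). Returns the rotation matrix and post-alignment RMSD."""
    Pc = P - P.mean(axis=0)                  # center both sets
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                            # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                       # optimal proper rotation
    aligned = Pc @ R.T
    rmsd = np.sqrt(np.mean(np.sum((aligned - Qc) ** 2, axis=1)))
    return R, rmsd
```

Given a correspondence from pharmacophore or heavy-atom matching, this step is what makes the alignment "rigid-body": only rotation and translation are applied, never internal distortion of the conformer.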

  13. Pre-Motor Response Time Benefits in Multi-Modal Displays

    DTIC Science & Technology

    2013-11-12

    The present series of experiments tested the assimilation and efficacy of purpose-created tactile ... equivalent visual representations of these same messages. Results indicated that there was a performance benefit for concurrent message presentations. Approved for public release; distribution is unlimited.

  14. IGSTK: Framework and example application using an open source toolkit for image-guided surgery applications

    NASA Astrophysics Data System (ADS)

    Cheng, Peng; Zhang, Hui; Kim, Hee-su; Gary, Kevin; Blake, M. Brian; Gobbi, David; Aylward, Stephen; Jomier, Julien; Enquobahrie, Andinet; Avila, Rick; Ibanez, Luis; Cleary, Kevin

    2006-03-01

    Open source software has tremendous potential for improving the productivity of research labs and enabling the development of new medical applications. The Image-Guided Surgery Toolkit (IGSTK) is an open source software toolkit based on ITK, VTK, and FLTK, and uses the cross-platform tools CMAKE and DART to support common operating systems such as Linux, Windows, and MacOS. IGSTK integrates the basic components needed in surgical guidance applications and provides a common platform for fast prototyping and development of robust image-guided applications. This paper gives an overview of the IGSTK framework and current status of development followed by an example needle biopsy application to demonstrate how to develop an image-guided application using this toolkit.

  15. Multi-modal automatic montaging of adaptive optics retinal images

    PubMed Central

    Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.

    2016-01-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714
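The outlier-removal step described above can be illustrated for the simplest motion model. This sketch assumes keypoint correspondences (e.g., from SIFT matching) are already in hand and estimates a pure translation between two overlapping images; the published algorithm handles the full montage, so treat this only as a minimal RANSAC demonstration:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from noisy keypoint correspondences,
    rejecting outlier matches (the RANSAC stage of a montaging pipeline).

    src, dst: (N, 2) matched keypoint coordinates in the two images.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))               # minimal sample: one match
        t = dst[i] - src[i]                      # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                      # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the largest consensus set for a less noisy estimate
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

The same sample-score-refit loop generalizes to similarity or affine models by drawing larger minimal samples; the inlier mask is also what lets a montager flag discontinuities when no consensus is found.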

  16. Multi-modal automatic montaging of adaptive optics retinal images.

    PubMed

    Chen, Min; Cooper, Robert F; Han, Grace K; Gee, James; Brainard, David H; Morgan, Jessica I W

    2016-12-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download.

  17. The Efficient Utilization of Open Source Information

    SciTech Connect

    Baty, Samuel R.

    2016-08-11

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem; the quality of such information can also hinder efforts. To show this, two case studies are mentioned, Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.) or direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  18. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.

  19. The 2016 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083

  20. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  1. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  2. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need of high-end expensive hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  3. Cross-platform digital assessment forms for evaluating surgical skills.

    PubMed

    Andersen, Steven Arild Wuyts

    2015-01-01

A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments at various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.

  4. [Multi-modal treatment of patients with multiple liver metastases caused by sigmoid cancer].

    PubMed

    Sawada, S; Nagata, K; Kato, T; Oshima, T; Yoshida, M; Kawa, S; Harima, K; Tanaka, Y; Nakamura, H

    1989-05-01

A case of sigmoid cancer with multiple liver metastases (S2PON3 + H3) treated by multi-modal therapy is reported. The multi-modal treatment included intra-arterial administration of anti-cancer drugs as a pre-surgical treatment, intra-arterial infusion chemotherapy lasting three to five weeks (given three times), hyperthermia combined with intra-arterial administration of anti-cancer drugs, and placement of an intra-arterial expandable metallic stent. The patient survived for 2 years and 4 months in good condition.

  5. There's No Need to Fear Open Source

    ERIC Educational Resources Information Center

    Balas, Janet

    2005-01-01

The last time this author wrote about open source (OS) software was in the September 2004 issue of Computers in Libraries, which was devoted to making the most of what you have and do-it-yourself solutions. After the column appeared, she received an e-mail from David Dorman of Index Data, who believed that she had done OS products a disservice…

  6. Of Birkenstocks and Wingtips: Open Source Licenses

    ERIC Educational Resources Information Center

    Gandel, Paul B.; Wheeler, Brad

    2005-01-01

    The notion of collaborating to create open source applications for higher education is rapidly gaining momentum. From course management systems to ERP financial systems, higher education institutions are working together to explore whether they can in fact build a better mousetrap. As Lois Brooks, of Stanford University, recently observed, the…

  7. Communal Resources in Open Source Software Development

    ERIC Educational Resources Information Center

    Spaeth, Sebastian; Haefliger, Stefan; von Krogh, Georg; Renzl, Birgit

    2008-01-01

Introduction: Virtual communities play an important role in innovation. The paper focuses on the particular form of collective action in virtual communities underlying Open Source software development projects. Method: Building on resource mobilization theory and private-collective innovation, we propose a theory of collective action in…

  8. Understanding the Requirements for Open Source Software

    DTIC Science & Technology

    2009-06-17

…fields like astrophysics that critically depend on software, open source is considered an essential precondition for research to proceed, and for… contributors or participants, new ideas, new career opportunities, and new research publications.

  9. Open Source Software and the Intellectual Commons.

    ERIC Educational Resources Information Center

    Dorman, David

    2002-01-01

    Discusses the Open Source Software method of software development and its relationship to control over information content. Topics include digital library resources; reference services; preservation; the legal and economic status of information; technical standards; access to digital data; control of information use; and copyright and patent laws.…

  10. Open-Source Syringe Pump Library

    PubMed Central

    Wijnen, Bas; Hunt, Emily J.; Anzalone, Gerald C.; Pearce, Joshua M.

    2014-01-01

This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less of what one would expect to pay for a commercial syringe pump of similar performance. The design should suit the needs of any research activity requiring a syringe pump, including carefully controlled dosing of reagents and pharmaceuticals and delivery of viscous 3-D printer media, among other applications. PMID:25229451
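The core dosing calculation such a pump performs is straightforward: dividing the target volumetric flow rate by the syringe's cross-sectional area gives the required plunger speed, which the controller converts into a stepper-motor pulse rate via the lead-screw pitch. A minimal sketch of that conversion follows; the 0.8 mm lead-screw pitch and 200-step motor with 16× microstepping are illustrative assumptions, not parameters of the published design:

```python
import math

def steps_per_second(flow_rate_ul_min: float,
                     syringe_diameter_mm: float,
                     lead_screw_pitch_mm: float = 0.8,
                     steps_per_rev: int = 200 * 16) -> float:
    """Convert a target flow rate into a stepper pulse rate.

    Plunger speed (mm/s) = volumetric flow / syringe cross-section;
    pulse rate = plunger speed * steps_per_rev / lead-screw pitch.
    """
    area_mm2 = math.pi * (syringe_diameter_mm / 2) ** 2
    flow_mm3_s = flow_rate_ul_min / 60.0   # 1 uL is exactly 1 mm^3
    plunger_mm_s = flow_mm3_s / area_mm2
    return plunger_mm_s * steps_per_rev / lead_screw_pitch_mm
```

For a 10 mm diameter syringe at 60 µL/min, the plunger must move about 13 µm/s, i.e. roughly 51 microsteps per second under these assumed mechanics.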

  11. Hillmaker: an open source occupancy analysis tool.

    PubMed

    Isken, Mark W

    2005-12-01

    Managerial decision making problems in the healthcare industry often involve considerations of customer occupancy by time of day and day of week. We describe an occupancy analysis tool called Hillmaker which has been used in numerous healthcare operations studies. It is being released as a free and open source software project.
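Occupancy analysis of the kind described reduces to binning patient stays by day of week and time of day. The following Python sketch illustrates the idea only and makes no claim about Hillmaker's actual implementation:

```python
from collections import Counter
from datetime import datetime, timedelta

def occupancy_by_hour(stays):
    """Count customers present during each (weekday, hour) bin.

    `stays` is an iterable of (arrival, departure) datetime pairs;
    every hourly bin a customer overlaps adds 1 to that bin.
    """
    counts = Counter()
    for arrive, depart in stays:
        t = arrive.replace(minute=0, second=0, microsecond=0)
        while t < depart:
            counts[(t.weekday(), t.hour)] += 1
            t += timedelta(hours=1)
    return counts
```

Aggregating such counts over many weeks yields the time-of-day/day-of-week occupancy profiles used in healthcare capacity studies.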

  12. Implementing Rakim: Open Source Chat Reference Software

    ERIC Educational Resources Information Center

    Caraway, Shawn; Payne, Susan

    2005-01-01

This article describes the conception, implementation, and current status of Rakim open source software at Midlands Technical College in Columbia, SC. Midlands Technical College (MTC) is a 2-year school in Columbia, S.C. It has two large campuses and three smaller campuses. Although the library functions as a single unit, there are separate…

  13. SeqKit: A Cross-Platform and Ultrafast Toolkit for FASTA/Q File Manipulation.

    PubMed

    Shen, Wei; Le, Shuai; Li, Yan; Hu, Fuquan

    2016-01-01

FASTA and FASTQ are basic and ubiquitous formats for storing nucleotide and protein sequences. Common manipulations of FASTA/Q files include converting, searching, filtering, deduplication, splitting, shuffling, and sampling. Existing tools implement only some of these manipulations, and not particularly efficiently, and some are available only for certain operating systems. Furthermore, the complicated installation process of required packages and running environments can render these programs less user friendly. This paper describes a cross-platform, ultrafast, and comprehensive toolkit for FASTA/Q processing. SeqKit provides executable binary files for all major operating systems, including Windows, Linux, and Mac OSX, and can be used directly without any dependencies or pre-configuration. SeqKit demonstrates competitive performance in execution time and memory usage compared to similar tools. The efficiency and usability of SeqKit enable researchers to rapidly accomplish common FASTA/Q file manipulations. SeqKit is open source and available on GitHub at https://github.com/shenwei356/seqkit.
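Two of the manipulations listed above, length filtering and deduplication, can be sketched in a few lines of Python. This illustrates the operations themselves and makes no claim about how SeqKit implements them:

```python
def parse_fasta(text):
    """Yield (header, sequence) records from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

def filter_and_dedup(text, min_len=0):
    """Keep records at least `min_len` long, dropping duplicate sequences."""
    seen, kept = set(), []
    for header, seq in parse_fasta(text):
        if len(seq) >= min_len and seq not in seen:
            seen.add(seq)
            kept.append((header, seq))
    return kept
```

SeqKit's value lies in performing such operations on multi-gigabyte files orders of magnitude faster than a naive script like this one.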

  14. Information content and analysis methods for Multi-Modal High-Throughput Biomedical Data

    NASA Astrophysics Data System (ADS)

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R.; Picone, Marco; Poli, Tito; Aliferis, Constantin F.; Statnikov, Alexander

    2014-03-01

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect ``multi-modal'' data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and the contribution of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data.

  15. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  16. Graduate Student Perceptions of Multi-Modal Tablet Use in Academic Environments

    ERIC Educational Resources Information Center

    Bryant, Ezzard C., Jr.

    2016-01-01

    The purpose of this study was to explore graduate student perceptions of use and the ease of use of multi-modal tablets to access electronic course materials, and the perceived differences based on students' gender, age, college of enrollment, and previous experience. This study used the Unified Theory of Acceptance and Use of Technology to…

  17. Measurement of photosynthetic response to plant water stress using a multi-modal sensing system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant yield and productivity are significantly affected by abiotic stresses such as water or nutrient deficiency. An automated, timely detection of plant stress can mitigate stress development, thereby maximizing productivity and fruit quality. A multi-modal sensing system was developed and evalua...

  18. Students' Multi-Modal Re-Presentations of Scientific Knowledge and Creativity

    ERIC Educational Resources Information Center

    Koren, Yitzhak; Klavir, Rama; Gorodetsky, Malka

    2005-01-01

The paper presents the results of a project that passed on to students the opportunity to re-present their acquired knowledge via the construction of multi-modal "learning resources". These "learning resources" substituted for lectures and books and became the official learning sources in the classroom. The rationale for the…

  19. Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge

    ERIC Educational Resources Information Center

    Blown, Eric; Bryce, Tom G. K.

    2010-01-01

    The astronomy concepts of 345 young people were studied over a 10-year period using a multi-media, multi-modal methodology in a research design where survey participants were interviewed three times and control subjects were interviewed twice. The purpose of the research was to search for evidence to clarify competing theories on "conceptual…

  20. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited numbers of open-source software in China, "Red Flag Linux" stands out most strikingly, commanding 30 percent share of Chinese software market. Unlike the spontaneity of open-source movement in North America, open-source software development in…

  1. Open Source Cable Models for EMI Simulations

    NASA Astrophysics Data System (ADS)

    Greedy, S.; Smartt, C.; Thomas, D. W. P.

    2016-05-01

This paper describes the progress of work towards an Open Source software toolset suitable for developing Spice-based multi-conductor cable models. The issues involved in creating a Spice transmission-line model that includes the frequency-dependent properties of real cables are presented, and the viability of Spice cable models is demonstrated through application to a three-conductor crosstalk model. Development of techniques to include models of shielded cables and incident-field excitation has also been demonstrated.

  2. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  3. Open Source Approach to Urban Growth Simulation

    NASA Astrophysics Data System (ADS)

    Petrasova, A.; Petras, V.; Van Berkel, D.; Harmon, B. A.; Mitasova, H.; Meentemeyer, R. K.

    2016-06-01

Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban - Regional Environment Simulation) - a patch-based, stochastic, multi-level land change modeling framework - as a case showing how what was once a closed and inaccessible model benefited from integration with open source GIS. We will describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and a graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We will describe FUTURES integration into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential or even modify the model for their applications.

  4. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks. Until now this kind of defense was a privilege of the few and well-budgeted; low-cost solutions left the defender vulnerable to the rise of innovative attack methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century of adopting open solutions for e-commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state the best-known problems in data handling and consequently propose the most appealing techniques for facing these challenges through an open solution.

  5. From open source communications to knowledge

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Roberts, Colin; Rogers, David; Webberley, Will; Innes, Martin; Braines, Dave

    2016-05-01

    Rapid processing and exploitation of open source information, including social media sources, in order to shorten decision-making cycles, has emerged as an important issue in intelligence analysis in recent years. Through a series of case studies and natural experiments, focussed primarily upon policing and counter-terrorism scenarios, we have developed an approach to information foraging and framing to inform decision making, drawing upon open source intelligence, in particular Twitter, due to its real-time focus and frequent use as a carrier for links to other media. Our work uses a combination of natural language (NL) and controlled natural language (CNL) processing to support information collection from human sensors, linking and schematising of collected information, and the framing of situational pictures. We illustrate the approach through a series of vignettes, highlighting (1) how relatively lightweight and reusable knowledge models (schemas) can rapidly be developed to add context to collected social media data; (2) how information from open sources can be combined with reports from trusted observers, for corroboration or to identify conflicting information; and (3) how the approach supports users operating at or near the tactical edge, to rapidly task information collection and inform decision-making. The approach is supported by bespoke software tools for social media analytics and knowledge management.

  6. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  7. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
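The essence of counting colonies or other circular objects is segmenting foreground pixels and counting the connected regions that result. A toy Python sketch of that counting step is shown below; OpenCFU itself uses far more robust image processing, so this is illustrative only:

```python
from collections import deque

def count_blobs(image, threshold=128):
    """Count connected foreground regions in a 2-D grayscale grid.

    Pixels brighter than `threshold` are foreground; 4-connected
    regions are counted with a breadth-first flood fill.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                blobs += 1                      # new region found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs
```

Real colony counters must additionally handle uneven illumination, touching colonies, and non-circular artifacts, which is where tools such as OpenCFU add their value.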

  8. Open Source GIS based integrated watershed management

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address

  9. OpenMS: a flexible open-source software platform for mass spectrometry data analysis.

    PubMed

    Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver

    2016-08-30

    High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.

  10. Sensorcaching: An Open-Source platform for citizen science and environmental monitoring

    NASA Astrophysics Data System (ADS)

    O'Keefe, Michael

    Sensorcaching is an Open-Source hardware and software project designed with several goals in mind. It allows for long-term environmental monitoring with low cost and low power-usage hardware. It encourages citizens to take an active role in the health of their community by providing the means to record and explore changes in their environment. And it provides opportunities for education about the necessity and techniques of studying our planet. Sensorcaching is a 3-part project, consisting of a hardware sensor, a cross-platform mobile application, and a web platform for data aggregation. Its evolution has been driven by the desire to allow for long-term environmental monitoring by laypeople without significant capital expenditures or onerous technical burdens.

  11. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    PubMed

    Dryden, Michael D M; Wheeler, Aaron R

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.

  12. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  13. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    SciTech Connect

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  14. NMRFx Processor: a cross-platform NMR data processing program.

    PubMed

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.
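
    The script-driven, automatically parallelized processing model described here can be illustrated with a generic sketch. The operation names and data layout below are illustrative stand-ins, not NMRFx Processor's actual API; they only show the pattern of running an ordered script of operations over independent rows of a dataset in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative "processing operations" applied to one row (e.g. one FID) of a dataset.
def apodize(row, lb=1.0):
    return [x * (1.0 - lb * i / len(row)) for i, x in enumerate(row)]

def baseline(row):
    mean = sum(row) / len(row)
    return [x - mean for x in row]

# A "script" is just an ordered list of operations, as in a Python processing script.
SCRIPT = [apodize, baseline]

def process_row(row):
    for op in SCRIPT:
        row = op(row)
    return row

def process_dataset(rows, workers=4):
    # Rows are independent, so they can be dispatched across workers in
    # parallel, mirroring the automatic parallel execution described above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_row, rows))

data = [[1.0, 2.0, 3.0, 4.0]] * 3
out = process_dataset(data)
```

    Because `pool.map` preserves input order, the parallel result is identical to running the script sequentially over the rows.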

  15. Open source software to control Bioflo bioreactors.

    PubMed

    Burdge, David A; Libourel, Igor G L

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design, cannot be performed without developing custom software. In addition, support for third-party or custom-designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.
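
    The CSV protocol scripting described above might look like the following minimal sketch. The three-column (time, parameter, setpoint) layout is an assumed illustration, not the software's documented schema:

```python
import csv
import io

# Hypothetical protocol format: each row is (time_in_minutes, parameter, setpoint).
PROTOCOL_CSV = """\
0,agitation,200
0,temperature,37.0
120,agitation,400
"""

def load_protocol(text):
    # Parse and sort the protocol steps by time.
    steps = []
    for t, param, value in csv.reader(io.StringIO(text)):
        steps.append((float(t), param, float(value)))
    return sorted(steps, key=lambda s: s[0])

def setpoints_at(steps, minutes):
    # Replay all steps up to `minutes` to obtain the currently active setpoints.
    state = {}
    for t, param, value in steps:
        if t <= minutes:
            state[param] = value
    return state

steps = load_protocol(PROTOCOL_CSV)
state = setpoints_at(steps, 150)   # {'agitation': 400.0, 'temperature': 37.0}
```

    A conditional protocol (the Python-based execution model mentioned above) would replace the time-only trigger with arbitrary predicates over measured process values.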

  16. The open-source neuroimaging research enterprise.

    PubMed

    Marcus, Daniel S; Archie, Kevin A; Olsen, Timothy R; Ramaratnam, Mohana

    2007-11-01

    While brain imaging in the clinical setting is largely a practice of looking at images, research neuroimaging is a quantitative and integrative enterprise. Images are run through complex batteries of processing and analysis routines to generate numeric measures of brain characteristics. Other measures potentially related to brain function - demographics, genetics, behavioral tests, neuropsychological tests - are key components of most research studies. The canonical scanner - PACS - viewing station axis used in clinical practice is therefore inadequate for supporting neuroimaging research. Here, we model the neuroimaging research enterprise as a workflow. The principal components of the workflow include data acquisition, data archiving, data processing and analysis, and data utilization. We also describe a set of open-source applications to support each step of the workflow and the transitions between these steps. These applications include DIGITAL IMAGING AND COMMUNICATIONS IN MEDICINE viewing and storage tools, the EXTENSIBLE NEUROIMAGING ARCHIVE TOOLKIT data archiving and exploration platform, and an engine for running processing/analysis pipelines. The overall picture presented aims to motivate open-source developers to identify key integration and communication points for interoperating with complementary applications.

  17. Spatial rainfall data in open source environment

    NASA Astrophysics Data System (ADS)

    Schuurmans, Hanneke; Maarten Verbree, Jan; Leijnse, Hidde; van Heeringen, Klaas-Jan; Uijlenhoet, Remko; Bierkens, Marc; van de Giesen, Nick; Gooijer, Jan; van den Houten, Gert

    2013-04-01

    Since January 2013, the Netherlands has had access to innovative high-quality rainfall data for water managers. The product is innovative for the following reasons. (i) It was developed in a 'golden triangle' construction: a cooperation between government, business and research. (ii) The rainfall products are developed under the open-source GPL license. The initiative comes from a group of water boards in the Netherlands that joined forces to fund the development of a new rainfall product. Not only data from Dutch radar stations are used (as is currently done by the Dutch meteorological organization KNMI), but also data from radars in Germany and Belgium. After a radar composite is made, it is adjusted using data from rain gauges (ground truth). This results in 9 different rainfall products that give the best rainfall data for each moment. Specific knowledge is necessary to develop this kind of data, so a pool of experts (KNMI, Deltares and 3 universities) participated in the development. The philosophy of the developers is that products like this should be developed as open source. This way knowledge is shared and the whole community is able to make suggestions for improvement. In our opinion this is the only way to make real progress in product development, and it makes optimal use of the financial resources of government organizations. More info (in Dutch): www.nationaleregenradar.nl
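
    As an illustration of the gauge-adjustment step, one common scheme is a mean-field-bias correction, in which the radar composite is scaled by the ratio of gauge totals to radar totals at the gauge locations. This sketch is a generic textbook example, not the actual algorithm of the product described above:

```python
# Mean-field-bias adjustment: scale the radar composite by the mean ratio
# between rain-gauge observations and the radar estimates at the gauges.
def mean_field_bias(radar_at_gauges, gauge_values):
    pairs = [(r, g) for r, g in zip(radar_at_gauges, gauge_values) if r > 0]
    if not pairs:
        return 1.0  # no usable gauge/radar pairs: leave the composite unchanged
    return sum(g for _, g in pairs) / sum(r for r, _ in pairs)

def adjust_composite(composite, bias):
    # Apply one multiplicative bias factor to every grid cell.
    return [[cell * bias for cell in row] for row in composite]

radar_at_gauges = [2.0, 4.0, 6.0]   # radar estimates at gauge locations (mm)
gauge_values    = [3.0, 6.0, 9.0]   # ground-truth gauge readings (mm)
bias = mean_field_bias(radar_at_gauges, gauge_values)   # 1.5
adjusted = adjust_composite([[2.0, 4.0]], bias)          # [[3.0, 6.0]]
```

    A single multiplicative factor is the simplest option; spatially varying adjustments (e.g. kriging of gauge-radar residuals) refine this idea.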

  18. Adaptation of Physiological and Cognitive Workload via Interactive Multi-modal Displays

    DTIC Science & Technology

    2014-05-28

    Keywords: Multi-modal Displays, Soldier Performance, Tactile Displays, Threat Detection. The abstract in this record is fragmentary; the recoverable portion reads: "...as opposed to purely the motor element. Further, we established that improvements in processing speed were due to concurrent tactile stimulation while improvements in processing accuracy were due to..."

  19. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and the integrated output becomes the final result. The motion and audio models are learned using Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models scores the highest recognition rate. This improvement in recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.
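
    The decision-level integration of the three models can be sketched as a generic weighted late fusion. The gesture labels, scores and equal weights below are illustrative, not the paper's actual framework:

```python
def fuse(scores_per_model, weights=None):
    # scores_per_model: one {label: score} dict per model (motion, audio, video).
    # Sum the (optionally weighted) per-class scores and pick the best label.
    weights = weights or [1.0] * len(scores_per_model)
    fused = {}
    for w, scores in zip(weights, scores_per_model):
        for label, s in scores.items():
            fused[label] = fused.get(label, 0.0) + w * s
    return max(fused, key=fused.get)

# Hypothetical per-class scores from the three unimodal recognizers.
motion = {"wave": 0.6, "point": 0.4}
audio  = {"wave": 0.3, "point": 0.7}
video  = {"wave": 0.7, "point": 0.3}
best = fuse([motion, audio, video])   # "wave"
```

    The complementary relationship noted in the abstract shows up here: the audio model alone would have chosen "point", but the fused evidence favors "wave".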

  20. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records

    PubMed Central

    Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    Objective There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. Materials and methods We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. Results An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. Discussion A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. Conclusion We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries. PMID:22319176
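
    The gain from the multi-modal strategy comes from pooling the attributes that each mode identifies independently. A minimal sketch of such a union across structured query, NLP and OCR outputs follows; the attribute names are hypothetical, not the study's actual phenotype variables:

```python
def combine_attributes(by_mode):
    # by_mode maps each mode ("structured", "nlp", "ocr") to the set of
    # cataract attributes that mode found for one subject.
    combined = set()
    for attrs in by_mode.values():
        combined |= attrs
    return combined

# Hypothetical findings for one subject from the three extraction modes.
by_mode = {
    "structured": {"cataract_dx"},
    "nlp": {"cataract_dx", "nuclear_sclerotic"},
    "ocr": {"severity_2+"},
}
attrs = combine_attributes(by_mode)   # 3 distinct attributes; no single mode found more than 2
```

    Each attribute would still be validated against chart review, since pooling raises recall but can only maintain precision if each mode is individually reliable.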

  1. EVolution: an edge-based variational method for non-rigid multi-modal image registration

    NASA Astrophysics Data System (ADS)

    de Senneville, B. Denis; Zachiu, C.; Ries, M.; Moonen, C.

    2016-10-01

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only the presence of contrasts between physiological tissues, preferably in both image modalities, and assumes deformable/elastic tissues. Given both, the method is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to the ones provided by state-of-the-art multi-modal image registration algorithms, for all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. that are only present in some of the images).

  2. Evaluation of registration strategies for multi-modality images of rat brain slices

    NASA Astrophysics Data System (ADS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-05-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  3. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
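
    The weighted decision-level fusion described above reduces to combining the two modality distances and taking the minimum over the gallery. The weights and distances below are illustrative, not the paper's tuned coefficients:

```python
def total_distance(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    # Decision-level fusion: weighted sum of the CLDP-Gabor and CLDP-Depth
    # classification distances (weights here are illustrative).
    return w_gabor * d_gabor + w_depth * d_depth

def identify(probe_dists, gallery_ids):
    # probe_dists: one (d_gabor, d_depth) pair per gallery subject.
    # The probe is assigned the identity with the smallest total distance.
    totals = [total_distance(dg, dd) for dg, dd in probe_dists]
    return gallery_ids[totals.index(min(totals))]

ids = ["alice", "bob"]
dists = [(0.2, 0.5), (0.4, 0.1)]   # totals: 0.32 vs 0.28
who = identify(dists, ids)          # "bob"
```

    Note how the fusion can flip the decision: by the Gabor distance alone the probe is closer to "alice", but the depth evidence outweighs it.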

  4. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    DOE PAGES

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; ...

    2016-05-03

    Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This paper identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. Finally, this work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  5. Identification of multi-modal plasma responses to applied magnetic perturbations using the plasma reluctance

    SciTech Connect

    Logan, Nikolas C.; Paz-Soldan, Carlos; Park, Jong-Kyu; Nazikian, Raffi

    2016-05-03

    Using the plasma reluctance, the Ideal Perturbed Equilibrium Code is able to efficiently identify the structure of multi-modal magnetic plasma response measurements and the corresponding impact on plasma performance in the DIII-D tokamak. Recent experiments demonstrated that multiple kink modes of comparable amplitudes can be driven by applied nonaxisymmetric fields with toroidal mode number n = 2. This multi-modal response is in good agreement with ideal magnetohydrodynamic models, but detailed decompositions presented here show that the mode structures are not fully described by either the least stable modes or the resonant plasma response. This paper identifies the measured response fields as the first eigenmodes of the plasma reluctance, enabling clear diagnosis of the plasma modes and their impact on performance from external sensors. The reluctance shows, for example, how very stable modes compose a significant portion of the multi-modal plasma response field and that these stable modes drive significant resonant current. Finally, this work is an overview of the first experimental applications using the reluctance to interpret the measured response and relate it to multifaceted physics, aimed towards providing the foundation of understanding needed to optimize nonaxisymmetric fields for independent control of stability and transport.

  6. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.

  7. An open source simulator for water management

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Selby, Philip; Mohammed, Khaled; Khadem, Majed; Padula, Silvia; Harou, Julien; Rosenberg, David; Rheinheimer, David

    2015-04-01

    Descriptive modelling of water resource systems requires the representation of different aspects in one model: the physical system including hydrological inputs and engineered infrastructure, and human management, including social, economic and institutional behaviours and constraints. Although most water resource systems share some characteristics such as the ability to represent them as a network of nodes and links, geographical, institutional and other differences mean that invariably each water system functions in a unique way. A diverse group is developing an open source simulation framework which will allow model developers to build generalised water management models that are customised to the institutional, physical and economic components they are seeking to model. The framework will allow the simulation of complex individual and institutional behaviour required for the assessment of real-world resource systems. It supports the spatial and hierarchical structures commonly found in water resource systems. The individual infrastructures can be operated by different actors while policies are defined at a regional level by one or more institutional actors. The framework enables building multi-agent system simulators in which developers can define their own agent types and add their own decision making code. Developers using the framework have two main tasks: (i) extend the core classes to represent the aspects of their particular system, and (ii) write model structure files. Both are done in Python. For task one, users must either write new decision making code for each class or link to an existing code base to provide functionality to each of these extension classes. The model structure file links these extension classes in a standardised way to the network topology. The framework will be open source, written in Python, and available directly for download through standard installer packages.
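
    The first of the two developer tasks, extending a core node class with custom decision-making code, can be sketched with hypothetical class names (the real framework's base classes and structure-file wiring will differ):

```python
# Hypothetical core class provided by the framework.
class Node:
    def __init__(self, name):
        self.name = name

# Developer extension: a reservoir node with its own operating rule.
class Reservoir(Node):
    def __init__(self, name, storage, max_release):
        super().__init__(name)
        self.storage = storage          # current stored volume
        self.max_release = max_release  # physical release capacity

    def step(self, inflow, demand):
        # Developer-written decision code: meet demand where possible,
        # capped by the release capacity and the available water.
        self.storage += inflow
        release = min(demand, self.max_release, self.storage)
        self.storage -= release
        return release

res = Reservoir("res1", storage=100.0, max_release=30.0)
released = res.step(inflow=10.0, demand=50.0)   # 30.0; storage is now 80.0
```

    The second task, the model structure file, would then reference `Reservoir` by name and connect instances of it into the node-link network.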

  8. Query Health: standards-based, cross-platform population health surveillance

    PubMed Central

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussion This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371

  9. An Affordable Open-Source Turbidimeter

    PubMed Central

    Kelley, Christopher D.; Krolick, Alexander; Brunner, Logan; Burklund, Alison; Kahn, Daniel; Ball, William P.; Weber-Shirk, Monroe

    2014-01-01

    Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring. PMID:24759114
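
    The measurement principle reduces to mapping the sensor's pulse frequency to turbidity via calibration against standards of known turbidity. The two-point linear calibration and the numbers below are illustrative assumptions, not the published design's calibration curve:

```python
# A light-to-frequency sensor reports scattered light as a pulse frequency (Hz).
# Calibrating against two standards of known turbidity (in NTU) yields a
# linear frequency-to-NTU mapping.
def calibrate(f1, ntu1, f2, ntu2):
    slope = (ntu2 - ntu1) / (f2 - f1)
    return lambda f: ntu1 + slope * (f - f1)

# Hypothetical calibration points: clean water and a 100 NTU formazin standard.
to_ntu = calibrate(f1=100.0, ntu1=0.0, f2=1100.0, ntu2=100.0)
reading = to_ntu(600.0)   # about 50 NTU
```

    In practice, light scattering is only approximately linear over a limited range, so a field instrument would use multiple standards and possibly a piecewise or polynomial fit.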

  10. Open-Source Instructional Materials in Astronomy

    NASA Astrophysics Data System (ADS)

    Robertson, T. H.

    2004-12-01

    Instructional materials are being developed in an open-source environment for introductory astronomy courses. These materials are being developed on, and will be available through, the LON-CAPA network accessed through the internet. Advantages of this system, which include materials sharing, free-software, search capabilities, context sensitive help and branching, metadata and on-line evaluation, will be discussed. Materials developed to date are limited primarily to personalized homework with a variety of question types for large (n = 100 student) classes at the Astronomy 101 and algebra-based astronomy levels. A progress report, as well as preliminary assessment data, will be provided on the scope of materials developed to date. Plans for future expansion will be presented. This work was funded in part by grants from Ball State University.

  11. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  12. The Open Source Teaching Project (OSTP): Research Note.

    ERIC Educational Resources Information Center

    Hirst, Tony

    The Open Source Teaching Project (OSTP) is an attempt to apply a variant of the successful open source software approach to the development of educational materials. Open source software is software licensed in such a way as to allow anyone the right to modify and use it. From such a simple premise, a whole industry has arisen, most notably in the…

  13. Behind Linus's Law: Investigating Peer Review Processes in Open Source

    ERIC Educational Resources Information Center

    Wang, Jing

    2013-01-01

    Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…

  14. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  15. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In a large institution PACS, patient data may often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offer the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow access to several DICOM archives through a single point of access. An interactive web-based front-end allows users to search image data seamlessly across the different archives and display the results or route the image data to another DICOM compliant destination. An XML-based API allows other software programs to easily benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in DICOM. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under a free open source license and was developed using open-source components (Apache Tomcat as the web server, MySQL as the database, OJB for object/relational data mapping, etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multiple institutions through appropriate security and encryption mechanisms.
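
    The portal's single-point-of-access idea amounts to fanning one query out over several archives and merging the results, tagged by origin. The `Archive` interface below is a hypothetical stand-in for the actual DICOM query layer:

```python
# Hypothetical in-memory stand-in for one DICOM archive.
class Archive:
    def __init__(self, name, studies):
        self.name = name
        self.studies = studies

    def query(self, patient_id):
        # Return the studies this archive holds for the given patient.
        return [s for s in self.studies if s["patient_id"] == patient_id]

def portal_query(archives, patient_id):
    # Single access point: query every archive and merge the results,
    # tagging each hit with its source archive.
    results = []
    for archive in archives:
        for study in archive.query(patient_id):
            results.append({**study, "source": archive.name})
    return results

pacs1 = Archive("pacs1", [{"patient_id": "P1", "study": "CT head"}])
pacs2 = Archive("pacs2", [{"patient_id": "P1", "study": "MR brain"},
                          {"patient_id": "P2", "study": "CR chest"}])
hits = portal_query([pacs1, pacs2], "P1")   # two studies, from two different archives
```

    A real implementation would issue DICOM C-FIND (or the portal's XML API) calls in place of the in-memory `query`, and the `source` tag is what lets the front-end route a retrieve back to the correct archive.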

  16. An open source business model for malaria.

    PubMed

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate the disease. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how malaria R&D can best benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a small survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sectors that purchase the drugs and vaccines. We recommend that a more "open source" approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers we surveyed indicated that they would use such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profit is available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents, particularly related to new malaria

  17. Open Source Hardware for DIY Environmental Sensing

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After being collected and transmitted by the logger, the data are received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and WaterOneFlow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.
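    As a sketch of the data handling such a logger performs, an SDI-12 data response consists of a one-character sensor address followed by a run of signed values. A minimal Python parser for a response string (the sample values are made up; the Arduino-SDI-12 library itself is C++ and handles the wire protocol on the logger):

```python
import re

def parse_sdi12_data(response):
    """Parse an SDI-12 data response such as '0+21.5+1.203-0.04'.

    The first character is the sensor address; the rest is a run of
    signed decimal values, per the SDI-12 data-command format."""
    address, body = response[0], response[1:].strip()
    values = [float(v) for v in re.findall(r"[+-]\d+(?:\.\d+)?", body)]
    return address, values

addr, readings = parse_sdi12_data("0+21.5+1.203-0.04")
print(addr, readings)  # 0 [21.5, 1.203, -0.04]
```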

  18. Visi—A VTK- and QT-Based Open-Source Project for Scientific Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Chen, Cheng-Kai

    2009-03-01

    In this paper, we present an open-source project, Visi, for high-dimensional engineering and scientific data visualization. Visi features a state-of-the-art interactive user interface and graphics kernels built upon Qt (a cross-platform GUI toolkit) and VTK (an object-oriented visualization library). On initialization, Qt activates a main window and the VTK kernel is embedded into it, where the graphics resources are allocated. Visualization is driven through an interactive interface, so that data are rendered according to the user's preferences. The developed framework possesses high flexibility and extensibility for advanced functions (e.g., object combination) and further applications. Applications of Visi to data visualization in various fields, such as protein structures in bioinformatics, 3D semiconductor transistors, and the interconnect of very-large-scale integration (VLSI) layouts, are also illustrated to show the performance of Visi. The developed open-source project is available on our project website [1].

  19. Developing an Open Source Option for NASA Software

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Parks, John W. (Technical Monitor)

    2003-01-01

    We present arguments in favor of developing an Open Source option for NASA software; in particular we discuss how Open Source is compatible with NASA's mission. We compare and contrast several of the leading Open Source licenses, and propose one - the Mozilla license - for use by NASA. We also address some of the related issues for NASA with respect to Open Source. In particular, we discuss some of the elements in the External Release of NASA Software document (NPG 2210.1A) that will likely have to be changed in order to make Open Source a reality within the agency.

  20. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration

    PubMed Central

    Dryden, Michael D. M.; Wheeler, Aaron R.

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100

  1. Differential network analysis from cross-platform gene expression data

    PubMed Central

    Zhang, Xiao-Fei; Ou-Yang, Le; Zhao, Xing-Ming; Yan, Hong

    2016-01-01

    Understanding how the structure of a gene dependency network changes between two patient-specific groups is an important task for genomic research. Although many computational approaches have been proposed to undertake this task, most of them estimate correlation networks from group-specific gene expression data independently, without considering the common structure shared between different groups. In addition, with the development of high-throughput technologies, we can collect gene expression profiles of the same patients from multiple platforms. Therefore, inferring differential networks by considering cross-platform gene expression profiles will improve the reliability of network inference. We introduce a two-dimensional joint graphical lasso (TDJGL) model to simultaneously estimate group-specific gene dependency networks from gene expression profiles collected from different platforms and infer differential networks. TDJGL can borrow strength across different patient groups and data platforms to improve the accuracy of estimated networks. Simulation studies demonstrate that TDJGL provides more accurate estimates of gene networks and differential networks than previous competing approaches. We apply TDJGL to the PI3K/AKT/mTOR pathway in ovarian tumors to build differential networks associated with platinum resistance. The hub genes of our inferred differential networks are significantly enriched with known platinum resistance-related genes and include potential platinum resistance-related genes. PMID:27677586
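    The core idea, estimating group-specific inverse-covariance (precision) networks and comparing them edge by edge, can be sketched in simplified form. The sketch below uses plain shrinkage regularization rather than the joint graphical lasso that TDJGL actually solves, and the synthetic data, shrinkage and threshold values are illustrative:

```python
import numpy as np

def precision_network(X, shrinkage=0.1):
    """Regularized precision (inverse covariance) estimate.

    Simple shrinkage toward the identity keeps the covariance
    invertible; TDJGL instead solves a joint graphical lasso across
    groups and platforms, which this sketch does not implement."""
    S = np.cov(X, rowvar=False)
    S_shrunk = (1 - shrinkage) * S + shrinkage * np.eye(S.shape[0])
    return np.linalg.inv(S_shrunk)

def differential_edges(X_a, X_b, threshold=1.0):
    """Edges whose partial-correlation structure differs between groups."""
    diff = np.abs(precision_network(X_a) - precision_network(X_b))
    np.fill_diagonal(diff, 0.0)
    return np.argwhere(diff > threshold)

rng = np.random.default_rng(0)
X_a = rng.normal(size=(200, 5))              # group A: independent "genes"
X_b = X_a.copy()
X_b[:, 1] = X_b[:, 0] + 0.1 * rng.normal(size=200)  # couple genes 0 and 1 in group B
print(differential_edges(X_a, X_b))          # flags the (0, 1) edge
```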

  2. The Open Source Snowpack modelling ecosystem

    NASA Astrophysics Data System (ADS)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

    As a large number of numerical snow models are available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure based on the assumption that in steep terrain, the most relevant information is

  3. A graph-based approach for the retrieval of multi-modality medical images.

    PubMed

    Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan

    2014-02-01

    In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state

  4. Multi-modal imaging and cancer therapy using lanthanide oxide nanoparticles: current status and perspectives.

    PubMed

    Park, J Y; Chang, Y; Lee, G H

    2015-01-01

    Biomedical imaging is an essential tool for the diagnosis and therapy of diseases such as cancer. Indeed, medicine has developed alongside biomedical imaging methods. The sensitivity and resolution of biomedical imaging methods can be improved with imaging agents. Furthermore, it would be ideal if imaging agents could also be used as therapeutic agents, so that one dose could serve both diagnosis and therapy of disease (i.e., theragnosis). This would simplify the medical treatment of disease and also benefit patients. Mixed ((Ln1)x(Ln2)yO3, x + y = 2) or unmixed (Ln2O3) lanthanide (Ln) oxide nanoparticles (Ln = Eu, Gd, Dy, Tb, Ho, Er) are potential multi-modal imaging and cancer therapeutic agents. The lanthanides have a variety of magnetic and optical properties, useful for magnetic resonance imaging (MRI) and fluorescent imaging (FI), respectively. They also strongly attenuate X-ray beams, which is useful for X-ray computed tomography (CT). In addition, gadolinium-157 ((157)Gd) has the highest thermal neutron capture cross section among stable radionuclides, which is useful for gadolinium neutron capture therapy (GdNCT). Therefore, mixed or unmixed lanthanide oxide nanoparticles can be used for multi-modal imaging (i.e., MRI-FI, MRI-CT, CT-FI, and MRI-CT-FI) and cancer therapy (i.e., GdNCT). Since mixed or unmixed lanthanide oxide nanoparticles are single-phase and solid-state, they can be easily synthesized, and are compact and robust, which is beneficial for biomedical applications. In this review, the physical properties of the lanthanides and the synthesis, characterization, multi-modal imaging, and cancer therapy applications of mixed and unmixed lanthanide oxide nanoparticles are discussed.

  5. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation.

    PubMed

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-03-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.
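    The alternating convolution/pooling structure described above can be illustrated with a minimal numpy sketch: a multi-channel (T1, T2, FA) input patch passed through one set of trainable filters, a ReLU nonlinearity, and a local max-pooling step. The patch size, kernel size and filter count below are illustrative, not the architecture reported in the paper, and no training is shown:

```python
import numpy as np

def conv2d(image, kernels):
    """Valid-mode 2D convolution: image (C, H, W), kernels (F, C, k, k)."""
    F, C, k, _ = kernels.shape
    _, H, W = image.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(image[:, i:i+k, j:j+k] * kernels[f])
    return out

def max_pool(feature_maps, size=2):
    """Local neighborhood max pooling over non-overlapping windows."""
    F, H, W = feature_maps.shape
    return feature_maps[:, :H - H % size, :W - W % size] \
        .reshape(F, H // size, size, W // size, size).max(axis=(2, 4))

rng = np.random.default_rng(1)
patch = rng.normal(size=(3, 13, 13))     # T1, T2, FA channels of one image patch
kernels = rng.normal(size=(8, 3, 4, 4))  # 8 trainable filters (here random)
activations = np.maximum(conv2d(patch, kernels), 0)  # ReLU nonlinearity
pooled = max_pool(activations)
print(pooled.shape)  # (8, 5, 5)
```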

  6. An open-source laser electronics suite

    NASA Astrophysics Data System (ADS)

    Pisenti, Neal C.; Reschovsky, Benjamin J.; Barker, Daniel S.; Restelli, Alessandro; Campbell, Gretchen K.

    2016-05-01

    We present an integrated set of open-source electronics for controlling external-cavity diode lasers and other instruments in the laboratory. The complete package includes a low-noise circuit for driving high-voltage piezoelectric actuators, an ultra-stable current controller based on a previously published design, and a high-performance, multi-channel temperature controller capable of driving thermo-electric coolers or resistive heaters. Each circuit (with the exception of the temperature controller) is designed to fit in a Eurocard rack equipped with a low-noise linear power supply capable of driving up to 5 A at ±15 V. A custom backplane allows signals to be shared between modules, and a digital communication bus makes the entire rack addressable by external control software over TCP/IP. The modular architecture makes it easy for additional circuits to be designed and integrated with existing electronics, providing a low-cost, customizable alternative to commercial systems without sacrificing performance.

  7. XNAT Central: Open sourcing imaging research data.

    PubMed

    Herrick, Rick; Horton, William; Olsen, Timothy; McKay, Michael; Archie, Kevin A; Marcus, Daniel S

    2016-01-01

    XNAT Central is a publicly accessible medical imaging data repository based on the XNAT open-source imaging informatics platform. It hosts a wide variety of research imaging data sets. The primary motivation for creating XNAT Central was to provide a central repository to host and provide access to a wide variety of neuroimaging data. In this capacity, XNAT Central hosts a number of data sets from research labs and investigative efforts from around the world, including the OASIS Brains imaging studies, the NUSDAST study of schizophrenia, and more. Over time, XNAT Central has expanded to include imaging data from many different fields of research, including oncology, orthopedics, cardiology, and animal studies, but continues to emphasize neuroimaging data. Through the use of XNAT's DICOM metadata extraction capabilities, XNAT Central provides a searchable repository of imaging data that can be referenced by groups, labs, or individuals working in many different areas of research. The future development of XNAT Central will be geared towards greater ease of use as a reference library of heterogeneous neuroimaging data and associated synthetic data. It will also become a tool for making available data that support published research and academic articles.

  8. Multi-modal analysis for person type classification in news video

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Hauptmann, Alexander G.

    2004-12-01

    Classifying the identities of people appearing in broadcast news video into anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidence, such as speech identity, transcript clues, temporal video structure, and named entities, and uses a statistical learning approach to combine all the features for person type classification. Experiments conducted on ABC World News Tonight video have demonstrated the effectiveness of the approach, and the contributions of different categories of features have been compared.
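    A minimal sketch of the feature-combination step: per-modality feature vectors are fused and a statistical classifier is trained on the result. The synthetic features, labels, and the choice of logistic regression here are illustrative stand-ins, not the paper's actual feature set or learner:

```python
import numpy as np

def fuse(feature_sets):
    """Concatenate per-modality feature vectors (simple early fusion)."""
    return np.concatenate(feature_sets, axis=1)

def train_logistic(X, y, lr=0.5, steps=500):
    """Minimal logistic-regression trainer via gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = np.clip(X @ w, -30, 30)          # clip logits for numerical safety
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(2)
n = 100
speech_id = rng.normal(size=(n, 2))   # stand-in for speaker-identity scores
transcript = rng.normal(size=(n, 3))  # stand-in for transcript/named-entity cues
labels = (speech_id[:, 0] + transcript[:, 0] > 0).astype(float)

X = fuse([speech_id, transcript])
w = train_logistic(X, labels)
pred = (X @ w > 0).astype(float)
print("train accuracy:", (pred == labels).mean())
```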

  9. Multi-modal analysis for person type classification in news video

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Hauptmann, Alexander G.

    2005-01-01

    Classifying the identities of people appearing in broadcast news video into anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidence, such as speech identity, transcript clues, temporal video structure, and named entities, and uses a statistical learning approach to combine all the features for person type classification. Experiments conducted on ABC World News Tonight video have demonstrated the effectiveness of the approach, and the contributions of different categories of features have been compared.

  10. Continuous multi-modality brain imaging reveals modified neurovascular seizure response after intervention

    PubMed Central

    Ringuette, Dene; Jeffrey, Melanie A.; Dufour, Suzie; Carlen, Peter L.; Levi, Ofer

    2017-01-01

    We developed a multi-modal brain imaging system to investigate the relationship between blood flow, blood oxygenation/volume, intracellular calcium and electrographic activity during acute seizure-like events (SLEs), both before and after pharmacological intervention. Rising blood volume was highly specific to SLE-onset whereas blood flow was more correlated with all electrographic activity. Intracellular calcium spiked between SLEs and at SLE-onset with oscillation during SLEs. Modified neurovascular and ionic SLE responses were observed after intervention and the interval between SLEs became shorter and more inconsistent. Comparison of artery and vein pulsatile flow suggests proximal interference and greater vascular leakage prior to intervention. PMID:28270990

  11. A low-power multi-modal body sensor network with application to epileptic seizure monitoring.

    PubMed

    Altini, Marco; Del Din, Silvia; Patel, Shyamal; Schachter, Steven; Penders, Julien; Bonato, Paolo

    2011-01-01

    Monitoring patients' physiological signals during their daily activities in the home environment is one of the challenges of health care. New ultra-low-power wireless technologies could help to achieve this goal. In this paper we present a low-power, multi-modal, wearable sensor platform for the simultaneous recording of activity and physiological data. First, we provide a description of the wearable sensor platform and its characteristics with respect to power consumption. Second, we present the preliminary results of a comparison between our sensors and a reference system, on healthy subjects, to test the reliability of the detected physiological (electrocardiogram and respiration) and electromyography signals.

  12. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  13. Multi-Modal Imaging with a Toolbox of Influenza A Reporter Viruses.

    PubMed

    Tran, Vy; Poole, Daniel S; Jeffery, Justin J; Sheahan, Timothy P; Creech, Donald; Yevtodiyenko, Aleksey; Peat, Andrew J; Francis, Kevin P; You, Shihyun; Mehle, Andrew

    2015-10-13

    Reporter viruses are useful probes for studying multiple stages of the viral life cycle. Here we describe an expanded toolbox of fluorescent and bioluminescent influenza A reporter viruses. The enhanced utility of these tools enabled kinetic studies of viral attachment, infection, and co-infection. Multi-modal bioluminescence and positron emission tomography-computed tomography (PET/CT) imaging of infected animals revealed that antiviral treatment reduced viral load, dissemination, and inflammation. These new technologies and applications will dramatically accelerate in vitro and in vivo influenza virus studies.

  14. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom, respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
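    The volume formula and the regression check described above are easy to reproduce. The measurements below are hypothetical (a modality with a uniform 5% overestimate), chosen only to show how the regression slope quantifies systematic error:

```python
import numpy as np

def ellipsoid_volume(a, b, c):
    """Tumor volume from three perpendicular maximum diameters: V = (pi/6)*a*b*c."""
    return (np.pi / 6.0) * a * b * c

# Spherical test objects with the phantom's 2-14 mm diameters (reference volumes, mm^3).
reference = np.array([ellipsoid_volume(d, d, d) for d in (2, 4, 7, 10, 14)])
measured = 1.05 * reference  # hypothetical modality overestimating by 5%

# Regression through the origin: a slope near 1 indicates agreement with reference.
slope = np.sum(measured * reference) / np.sum(reference ** 2)
print(round(slope, 3))  # 1.05
```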

  15. An Open Source Business Model for Malaria

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate the disease. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how malaria R&D can best benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a small survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, ‘closed’ publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sectors that purchase the drugs and vaccines. We recommend that a more “open source” approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers we surveyed indicated that they would use such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profit is available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President’s Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents, particularly related to new

  16. A Multi-Modal Approach to Assessing Recovery in Youth Athletes Following Concussion

    PubMed Central

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-01-01

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short and long term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one’s participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity of clinically driven research aimed specifically at exploring concussion within the youth sport population and, more specifically, at multi-modal approaches to measuring recovery. This article provides an overview of a novel and multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community. PMID:25285728

  17. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to guarantee their respective non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively.
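    The non-negativity-preserving multiplicative updates at the heart of such methods follow the classic Lee-Seung NMF scheme: every factor in the update is non-negative, so the iterates stay non-negative automatically. The sketch below is the generic NMF update, not the authors' full particle-filter tracker; all names and sizes are illustrative.

    ```python
    import numpy as np

    def nmf_multiplicative(V, k, iters=500, eps=1e-9, seed=0):
        """Lee-Seung multiplicative updates for V ~ W @ H.

        Each update multiplies by a ratio of non-negative terms,
        so W and H remain non-negative throughout."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, k)) + eps
        H = rng.random((k, m)) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # coefficient update
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # dictionary update
        return W, H

    # Toy example: factor an exactly rank-1 non-negative matrix.
    V = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])
    W, H = nmf_multiplicative(V, k=1)
    ```

    In the online tracking setting, such updates would be applied incrementally per frame rather than in a batch loop.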

  18. Multi-modal contributions to detoxification of acute pharmacotoxicity by a triglyceride micro-emulsion

    PubMed Central

    Fettiplace, Michael R; Lis, Kinga; Ripper, Richard; Kowal, Katarzyna; Pichurko, Adrian; Vitello, Dominic; Rubinstein, Israel; Schwartz, David; Akpa, Belinda S; Weinberg, Guy

    2014-01-01

    Triglyceride micro-emulsions such as Intralipid® have been used to reverse cardiac toxicity induced by a number of drugs but reservations about their broad-spectrum applicability remain because of the poorly understood mechanism of action. Herein we report an integrated mechanism of reversal of bupivacaine toxicity that includes both transient drug scavenging and a cardiotonic effect that couple to accelerate movement of the toxin away from sites of toxicity. We thus propose a multi-modal therapeutic paradigm for colloidal bio-detoxification whereby a micro-emulsion both improves cardiac output and rapidly ferries the drug away from organs subject to toxicity. In vivo and in silico models of toxicity were combined to test the contribution of individual mechanisms and reveal the multi-modal role played by the cardiotonic and scavenging actions of the triglyceride suspension. These results suggest a method to predict which drug toxicities are most amenable to treatment and inform the design of next-generation therapeutics for drug overdose. PMID:25483426

  19. Multi-modal contributions to detoxification of acute pharmacotoxicity by a triglyceride micro-emulsion.

    PubMed

    Fettiplace, Michael R; Lis, Kinga; Ripper, Richard; Kowal, Katarzyna; Pichurko, Adrian; Vitello, Dominic; Rubinstein, Israel; Schwartz, David; Akpa, Belinda S; Weinberg, Guy

    2015-01-28

    Triglyceride micro-emulsions such as Intralipid® have been used to reverse cardiac toxicity induced by a number of drugs but reservations about their broad-spectrum applicability remain because of the poorly understood mechanism of action. Herein we report an integrated mechanism of reversal of bupivacaine toxicity that includes both transient drug scavenging and a cardiotonic effect that couple to accelerate movement of the toxin away from sites of toxicity. We thus propose a multi-modal therapeutic paradigm for colloidal bio-detoxification whereby a micro-emulsion both improves cardiac output and rapidly ferries the drug away from organs subject to toxicity. In vivo and in silico models of toxicity were combined to test the contribution of individual mechanisms and reveal the multi-modal role played by the cardiotonic and scavenging actions of the triglyceride suspension. These results suggest a method to predict which drug toxicities are most amenable to treatment and inform the design of next-generation therapeutics for drug overdose.

  20. Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Zu, Chen; Nie, Feiping; Shen, Dinggang; Wu, Guorong

    2017-01-01

    Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is learned exclusively in the feature domain and may not be optimal in the label domain, which may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer’s disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.
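    The "conventional GTL" baseline that pGTL improves on can be sketched as fixed-graph label propagation in the style of Zhou et al. (F <- alpha * S * F + (1 - alpha) * Y on a normalized similarity graph). This is an illustrative baseline under assumed parameters, not the pGTL algorithm itself.

    ```python
    import numpy as np

    def label_propagation(X, y, n_labeled, alpha=0.9, iters=100, sigma=1.0):
        """Fixed-graph transductive baseline: build a Gaussian similarity
        graph in feature space, then propagate the first n_labeled labels
        to the remaining (unlabeled) subjects."""
        n = X.shape[0]
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))          # subject-wise graph
        np.fill_diagonal(W, 0.0)
        Dinv = np.diag(1.0 / np.sqrt(W.sum(1)))
        S = Dinv @ W @ Dinv                          # normalized graph
        classes = np.unique(y[:n_labeled])
        Y = np.zeros((n, classes.size))
        Y[np.arange(n_labeled), np.searchsorted(classes, y[:n_labeled])] = 1.0
        F = Y.copy()
        for _ in range(iters):                       # F <- aSF + (1-a)Y
            F = alpha * (S @ F) + (1 - alpha) * Y
        return classes[F.argmax(1)]

    # Two well-separated clusters; one labeled subject per cluster.
    X = np.array([[0.0, 0.0], [5.0, 5.0],            # labeled
                  [0.1, 0.0], [0.2, 0.0],            # unlabeled, cluster 0
                  [5.1, 5.0], [5.2, 5.0]])           # unlabeled, cluster 1
    y = np.array([0, 1, 0, 0, 1, 1])
    pred = label_propagation(X, y, n_labeled=2)
    ```

    pGTL's contribution is precisely that it does not keep this graph fixed, but refines it iteratively using the label-domain representation.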

  1. Study on electrodynamic sensor of multi-modality system for multiphase flow measurement

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Chen, Dixiang; Yang, Wuqiang

    2011-12-01

    Accurate measurement of multiphase flows, including gas/solids, gas/liquid, and liquid/liquid flows, is still challenging. In principle, electrical capacitance tomography (ECT) can be used to measure the concentration of solids in a gas/solids flow and the liquid (e.g., oil) fraction in a gas/liquid flow, if the liquid is non-conductive. Electrical resistance tomography (ERT) can be used to measure a gas/liquid flow, if the liquid is conductive. It has been attempted to use a dual-modality ECT/ERT system to measure both the concentration profile and the velocity profile by pixel-based cross correlation. However, this approach is not realistic because of the dynamic characteristics and the complexity of multiphase flows and the difficulties in determining the velocities by cross correlation. In this paper, the issues with dual-modality ECT/ERT and the difficulties with pixel-based cross correlation will be discussed. A new adaptive multi-modality (ECT, ERT and electrodynamic) sensor, which can be used to measure a gas/solids or gas/liquid flow, will be described. In particular, some details of the electrodynamic sensor in the multi-modality system, such as the optimal design of the sensing electrodes, the electrostatic charge amplifier and the signal processing, will be discussed. Initial experimental results will be given.
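    For a single pair of axially spaced sensors, the cross-correlation velocimetry discussed above reduces to finding the lag that maximizes the correlation between the upstream and downstream signals: that lag is the transit time of the flow pattern, and velocity is spacing over transit time. A minimal sketch with synthetic signals and an assumed 0.10 m electrode spacing:

    ```python
    import numpy as np

    def transit_time_velocity(upstream, downstream, spacing_m, dt_s):
        """Estimate flow velocity from two axially spaced sensor signals.
        The lag maximizing the cross correlation is the transit time."""
        xc = np.correlate(downstream - downstream.mean(),
                          upstream - upstream.mean(), mode="full")
        lag = xc.argmax() - (len(upstream) - 1)   # delay in samples
        return spacing_m / (lag * dt_s)

    # Synthetic flow noise: downstream sees the pattern 25 samples later.
    rng = np.random.default_rng(1)
    sig = rng.standard_normal(2000)
    up, down = sig[25:], sig[:-25]
    v = transit_time_velocity(up, down, spacing_m=0.10, dt_s=1e-3)
    ```

    The paper's point is that applying this per pixel across a tomogram is far harder than this single-channel case, which motivates the dedicated electrodynamic sensing electrodes.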

  2. Study on electrodynamic sensor of multi-modality system for multiphase flow measurement.

    PubMed

    Deng, Xiang; Chen, Dixiang; Yang, Wuqiang

    2011-12-01

    Accurate measurement of multiphase flows, including gas/solids, gas/liquid, and liquid/liquid flows, is still challenging. In principle, electrical capacitance tomography (ECT) can be used to measure the concentration of solids in a gas/solids flow and the liquid (e.g., oil) fraction in a gas/liquid flow, if the liquid is non-conductive. Electrical resistance tomography (ERT) can be used to measure a gas/liquid flow, if the liquid is conductive. It has been attempted to use a dual-modality ECT/ERT system to measure both the concentration profile and the velocity profile by pixel-based cross correlation. However, this approach is not realistic because of the dynamic characteristics and the complexity of multiphase flows and the difficulties in determining the velocities by cross correlation. In this paper, the issues with dual-modality ECT/ERT and the difficulties with pixel-based cross correlation will be discussed. A new adaptive multi-modality (ECT, ERT and electrodynamic) sensor, which can be used to measure a gas/solids or gas/liquid flow, will be described. In particular, some details of the electrodynamic sensor in the multi-modality system, such as the optimal design of the sensing electrodes, the electrostatic charge amplifier and the signal processing, will be discussed. Initial experimental results will be given.

  3. A Flamelet Modeling Approach for Multi-Modal Combustion with Inhomogeneous Inlets

    NASA Astrophysics Data System (ADS)

    Perry, Bruce A.; Mueller, Michael E.

    2016-11-01

    Large eddy simulations (LES) of turbulent combustion often employ models that make assumptions about the underlying flame structure. For example, flamelet models based on both premixed and nonpremixed flame structures have been implemented successfully in a variety of contexts. While previous flamelet models have been developed to account for multi-modal combustion or complex inlet conditions, none have been developed that can account for both effects simultaneously. Here, a new approach is presented that extends a nonpremixed, two-mixture fraction approach for compositionally inhomogeneous inlet conditions to partially premixed combustion. The approach uses the second mixture fraction to indicate the locally dominant combustion mode based on flammability considerations and switch between premixed and nonpremixed combustion models as appropriate. To assess this approach, LES predictions for this and other flamelet-based models are compared to data from a turbulent piloted jet burner with compositionally inhomogeneous inlets, which has been shown experimentally to exhibit multi-modal combustion. This work was supported by the NSF Graduate Research Fellowship Program under Grant DGE 1148900.

  4. A multi-modal approach to assessing recovery in youth athletes following concussion.

    PubMed

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-09-25

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short- and long-term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one's participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life). Despite this, there remains a paucity of clinically driven research aimed specifically at exploring concussion within the youth sport population and, more specifically, multi-modal approaches to measuring recovery. This article provides an overview of a novel, multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community.

  5. Multi-Modal Use of a Socially Directed Call in Bonobos

    PubMed Central

    Genty, Emilie; Clay, Zanna; Hobaiter, Catherine; Zuberbühler, Klaus

    2014-01-01

    ‘Contest hoots’ are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually an agonistic chase. Interestingly, ‘contest hoots’ were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted, preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction. PMID:24454745

  6. Multi-modal signal acquisition using a synchronized wireless body sensor network in geriatric patients.

    PubMed

    Pflugradt, Maik; Mann, Steffen; Tigges, Timo; Görnig, Matthias; Orglmeister, Reinhold

    2016-02-01

    Wearable home-monitoring devices acquiring various biosignals such as the electrocardiogram, photoplethysmogram, electromyogram, respirational activity and movements have become popular in many fields of research, medical diagnostics and commercial applications. Ambulatory settings in particular introduce still unsolved challenges to the development of sensor hardware and smart signal processing approaches. This work gives a detailed insight into a novel wireless body sensor network and addresses critical aspects such as signal quality, synchronicity among multiple devices, and the system's overall capabilities and limitations in cardiovascular monitoring. Disturbed autonomic regulation, such as orthostatic intolerance, is often an early sign of cardiovascular disease. In that context, blood pressure measurements play an important role in observing abnormalities such as hypo- or hypertension. Non-invasive and unobtrusive blood pressure monitoring still poses a significant challenge, promoting alternative approaches including pulse wave velocity considerations. In the scope of this work, the presented hardware is applied to demonstrate the continuous extraction of multi-modal parameters such as pulse arrival time within a preliminary clinical study. A Schellong test to diagnose orthostatic hypotension, which is typically based on blood pressure cuff measurements, has been conducted, serving as an application that might significantly benefit from novel multi-modal measurement principles. It is further shown that the system's synchronicity is as precise as 30 μs and that the integrated analog preprocessing circuits and additional accelerometer data provide significant advantages in ambulatory measurement environments.
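    Pulse arrival time, the multi-modal parameter extracted above, is the delay between the ECG R-peak and the arrival of the pulse wave at the photoplethysmogram site. A toy single-beat sketch on synthetic waveforms (real pipelines need beat segmentation, filtering and artifact rejection, all omitted here):

    ```python
    import numpy as np

    def pulse_arrival_time(ecg, ppg, fs):
        """Toy single-beat estimate: time from the ECG R-peak (signal
        maximum) to the pulse wave arrival in the PPG, taken here as
        the point of steepest PPG upstroke."""
        r_idx = int(np.argmax(ecg))               # R-peak sample
        upstroke = int(np.argmax(np.diff(ppg)))   # max slope = arrival
        return (upstroke - r_idx) / fs

    # Synthetic beat at 1 kHz: R-peak at 0.20 s, PPG upstroke near 0.45 s.
    fs = 1000
    t = np.arange(0, 1.0, 1 / fs)
    ecg = np.exp(-((t - 0.20) ** 2) / (2 * 0.005 ** 2))     # narrow R spike
    ppg = 1.0 / (1.0 + np.exp(-(t - 0.45) / 0.02))          # sigmoid upstroke
    pat = pulse_arrival_time(ecg, ppg, fs)
    ```

    Because pulse arrival time shortens as blood pressure rises, tracking it continuously is one route to cuffless blood pressure surrogates like those motivated in the abstract.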

  7. Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.

    PubMed

    Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B

    2015-02-01

    Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). The components produced by these dimensionality reduction techniques, however, contain contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code are publicly available.

  8. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival

    PubMed Central

    Phan, John H.; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D.

    2016-01-01

    The Big Data era in biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance. PMID:27493999
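    The two integration schemes compared above can be sketched abstractly: majority vote combines hard per-modality predictions, while stacked generalization trains a meta-model on the base models' outputs. The least-squares meta-model below is an illustrative stand-in for whatever meta-learner the study actually used; all data are synthetic.

    ```python
    import numpy as np

    def majority_vote(preds):
        """Hard majority vote over per-modality class predictions
        (rows = modalities, columns = samples)."""
        preds = np.asarray(preds)
        return (preds.sum(0) > preds.shape[0] / 2).astype(int)

    def stack(train_scores, y_train, test_scores):
        """Minimal stacked generalization: fit a least-squares meta-model
        on the base models' training scores, then combine their test
        scores into a single prediction."""
        A = np.column_stack([np.ones(len(y_train)), *train_scores])
        w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        B = np.column_stack([np.ones(len(test_scores[0])), *test_scores])
        return (B @ w > 0.5).astype(int)

    # One informative modality (s1) and one uninformative one (s2):
    # stacking learns to weight s1 and ignore s2.
    y_train = np.array([0, 1, 0, 1, 0, 1])
    s1 = y_train.astype(float)
    s2 = np.full(6, 0.5)
    pred = stack([s1, s2], y_train, [np.array([1.0, 0.0]),
                                     np.array([0.5, 0.5])])
    ```

    The meta-model's fitted weights are also what exposes each modality's contribution to the final prediction, the interpretability property the abstract highlights.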

  9. In vivo monitoring of structural and mechanical changes of tissue scaffolds by multi-modality imaging

    PubMed Central

    Park, Dae Woo; Ye, Sang-Ho; Jiang, Hong Bin; Dutta, Debaditya; Nonaka, Kazuhiro; Wagner, William R.; Kim, Kang

    2014-01-01

    Degradable tissue scaffolds are implanted to serve a mechanical role while healing processes occur and putatively assume the physiological load as the scaffold degrades. Mechanical failure during this period can be unpredictable as monitoring of structural degradation and mechanical strength changes at the implant site is not readily achieved in vivo, and non-invasively. To address this need, a multi-modality approach using ultrasound shear wave imaging (USWI) and photoacoustic imaging (PAI) for both mechanical and structural assessment in vivo was demonstrated with degradable poly(ester urethane)urea (PEUU) and polydioxanone (PDO) scaffolds. The fibrous scaffolds were fabricated with wet electrospinning, dyed with indocyanine green (ICG) for optical contrast in PAI, and implanted in the abdominal wall of 36 rats. The scaffolds were monitored monthly using USWI and PAI and were extracted at 0, 4, 8 and 12 wk for mechanical and histological assessment. The change in shear modulus of the constructs in vivo obtained by USWI correlated with the change in average Young's modulus of the constructs ex vivo obtained by compression measurements. The PEUU and PDO scaffolds exhibited distinctly different degradation rates and average PAI signal intensity. The distribution of PAI signal intensity also corresponded well to the remaining scaffolds as seen in explant histology. This evidence using a small animal abdominal wall repair model demonstrates that multi-modality imaging of USWI and PAI may allow tissue engineers to noninvasively evaluate concurrent mechanical stiffness and structural changes of tissue constructs in vivo for a variety of applications. PMID:24951048
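    The link between the USWI-derived shear modulus and the ex vivo Young's modulus rests on two standard soft-tissue relations: mu = rho * c_s^2 from the measured shear wave speed, and E ~ 3 * mu for a nearly incompressible material (Poisson's ratio near 0.5). A minimal sketch with an assumed tissue density of 1000 kg/m^3:

    ```python
    def shear_modulus(c_s, rho=1000.0):
        """Shear modulus (Pa) from shear wave speed c_s (m/s) and
        density rho (kg/m^3): mu = rho * c_s**2."""
        return rho * c_s ** 2

    def youngs_modulus(mu):
        """Incompressible approximation (Poisson's ratio ~ 0.5): E = 3*mu."""
        return 3.0 * mu

    mu = shear_modulus(2.0)    # a 2 m/s shear wave gives mu = 4 kPa
    E = youngs_modulus(mu)     # corresponding E = 12 kPa
    ```

    This is why a change in shear modulus measured in vivo can track the change in compressive Young's modulus measured on explants, as the study reports.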

  10. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to guarantee their respective non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively. PMID:25961715

  11. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images, typically by harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.

  12. The case for open-source software in drug discovery.

    PubMed

    DeLano, Warren L

    2005-02-01

    Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.

  13. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar (Light Detection and Ranging) is a remote sensing technology that uses a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured; the distance to the object is then easily calculated from the speed of light. In lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and distances to objects are recorded several thousand times per second. From this, a 3-dimensional structure can be recovered in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y and a z attribute; these three attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target or the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become a necessity: processing, visualizing or even simply loading lidar data can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its point cloud support is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers and is very costly. Existing open source lidar software approaches the loading and processing of lidar in an iterative fashion that requires
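    The ranging and point-cloud geometry described above follow directly from the time-of-flight relation d = c * t / 2 (the pulse travels out and back) plus a polar-to-Cartesian conversion per return. A minimal sketch; the azimuth/elevation conventions are generic assumptions, not those of any particular scanner:

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_range(round_trip_s):
        """One-way range from the pulse round-trip time: d = c * t / 2."""
        return C * round_trip_s / 2.0

    def polar_to_point(range_m, azimuth_rad, elevation_rad):
        """One scanner return -> an (x, y, z) point for the cloud."""
        x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = range_m * math.sin(elevation_rad)
        return (x, y, z)

    r = tof_range(666.7e-9)          # ~100 m target
    p = polar_to_point(r, 0.0, 0.0)  # straight ahead: (r, 0, 0)
    ```

    Repeating this conversion thousands of times per second is what produces the very large datasets whose loading and processing costs the text goes on to discuss.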

  14. OpenADR Open Source Toolkit: Developing Open Source Software for the Smart Grid

    SciTech Connect

    McParland, Charles

    2011-02-01

    Demand response (DR) is becoming an increasingly important part of power grid planning and operation. The advent of the Smart Grid, which mandates its use, further motivates selection and development of suitable software protocols to enable DR functionality. The OpenADR protocol has been developed and is being standardized to serve this goal. We believe that the development of a distributable, open source implementation of OpenADR will benefit this effort and motivate critical evaluation of its capabilities, by the wider community, for providing wide-scale DR services.

  15. Open-Source as a strategy for operational software - the case of Enki

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, seven large Norwegian hydropower producers including Statkraft, three universities and one software company. The most immediate task is, of course, that of software maturing. A more important challenge, however, is that of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and the use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plug-in routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has

  16. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data.

    PubMed

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter; Kjaer, Troels W; Sams, Thomas; Sorensen, Helge B D

    2012-08-01

    The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features, all data segments were classified using a support vector machine (SVM) algorithm as either simulated seizure or normal activity. A case study of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and a low false detection rate. The results showed the superiority of the multi-modal detection system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data.
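    The DWT sub-band energy features described above can be sketched with a hand-rolled Haar transform: decompose the signal level by level and take the energy of each detail band plus the final approximation as the feature vector. The abstract does not state which wavelet was used; Haar is assumed here for brevity, and the SVM classification step is omitted.

    ```python
    import numpy as np

    def haar_dwt(x):
        """One level of the Haar DWT: (approximation, detail) sub-bands."""
        x = np.asarray(x, dtype=float)
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return a, d

    def subband_energies(x, levels=3):
        """Energy of each detail sub-band plus the final approximation,
        the kind of feature vector fed to a classifier."""
        feats = []
        a = np.asarray(x, dtype=float)
        for _ in range(levels):
            a, d = haar_dwt(a)
            feats.append(float(np.sum(d ** 2)))
        feats.append(float(np.sum(a ** 2)))
        return feats

    # Because Haar is orthonormal, total energy is preserved across bands.
    sig = (np.sin(np.linspace(0, 8 * np.pi, 64))
           + 0.1 * np.cos(np.linspace(0, 60 * np.pi, 64)))
    feats = subband_energies(sig, levels=3)
    ```

    High-frequency muscle activity during a motor seizure shifts energy into the detail bands, which is what makes such features separable by an SVM.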

  17. Automatic quantification of multi-modal rigid registration accuracy using feature detectors.

    PubMed

    Hauler, F; Furtado, H; Jurisic, M; Polanec, S H; Spick, C; Laprie, A; Nestle, U; Sabatini, U; Birkfellner, W

    2016-07-21

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State of the art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features (SURF) or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by a radiation oncologist) and a quantitative (mean target registration error, mTRE, based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors gave comparable results but both overestimated the gold standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm, whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector, with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable
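    The quality measure itself, the mean Euclidean distance between matched interest-point pairs, is straightforward once matches are available. A sketch with hypothetical landmark coordinates in millimetres; the detection and matching step (SURF or Harris) is omitted:

    ```python
    import numpy as np

    def registration_error(points_fixed, points_moved):
        """Mean Euclidean distance (same units as the inputs) between
        matched interest-point pairs after registration, used as a
        proxy for the target registration error."""
        diff = np.asarray(points_fixed) - np.asarray(points_moved)
        return float(np.mean(np.linalg.norm(diff, axis=1)))

    # Hypothetical matched landmark pairs from two registered volumes (mm).
    fixed = np.array([[10.0, 20.0, 5.0],
                      [40.0, 22.0, 8.0],
                      [15.0, 60.0, 12.0]])
    moved = fixed + np.array([[1.0, 0.0, 0.0],
                              [0.0, 2.0, 0.0],
                              [0.0, 0.0, 2.0]])
    err_mm = registration_error(fixed, moved)
    ```

    The paper's comparison of mTREman, mTREHarris and mTRESURF amounts to evaluating this same statistic over different landmark sets.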

  18. Automatic quantification of multi-modal rigid registration accuracy using feature detectors

    NASA Astrophysics Data System (ADS)

    Hauler, F.; Furtado, H.; Jurisic, M.; Polanec, S. H.; Spick, C.; Laprie, A.; Nestle, U.; Sabatini, U.; Birkfellner, W.

    2016-07-01

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State-of-the-art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by a radiation oncologist) and a quantitative (mean target registration error—mTRE—based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors gave comparable results but both overestimated the gold standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector, with a resulting estimated registration error close to the gold standard, and was therefore shown to be the more suitable…
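
    The paper's quality score reduces to a simple computation once interest points have been detected and matched: the mean Euclidean distance between corresponding landmark pairs. A minimal NumPy sketch (the detector step is omitted; in practice the points would come from a Harris or SURF detector, and the function name here is a hypothetical stand-in):

```python
import numpy as np

def registration_error(points_fixed, points_moving):
    """Mean Euclidean distance between matched interest-point pairs.

    points_fixed, points_moving: (N, 3) arrays of matched landmark
    coordinates (in mm) found on the two registered image volumes.
    """
    diffs = np.asarray(points_fixed) - np.asarray(points_moving)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Toy example: three matched pairs, each offset by 3 mm along one axis.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
moving = fixed + np.array([3.0, 0.0, 0.0])
print(registration_error(fixed, moving))  # 3.0
```

    A perfect registration would drive this score toward zero; the paper compares such detector-based scores against marker-based mTRE values.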

  19. Open Source Initiative Powers Real-Time Data Streams

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.
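
    The core idea behind a ring-buffered data bus is that a fixed-capacity buffer keeps only the newest samples, so slow consumers never block live producers. A generic Python sketch of that behavior (illustrative only, not DataTurbine's actual API):

```python
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer: appending past capacity drops the oldest sample."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def push(self, sample):
        self._buf.append(sample)

    def snapshot(self):
        """Return the currently buffered samples, oldest first."""
        return list(self._buf)

rb = RingBuffer(capacity=3)
for reading in [1.0, 2.0, 3.0, 4.0]:   # e.g. live sensor values
    rb.push(reading)
print(rb.snapshot())  # [2.0, 3.0, 4.0] -- the first reading was overwritten
```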

  20. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    ERIC Educational Resources Information Center

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  1. Can open-source R&D reinvigorate drug research?

    PubMed

    Munos, Bernard

    2006-09-01

    The low number of novel therapeutics approved by the US FDA in recent years continues to cause great concern about productivity and declining innovation. Can open-source drug research and development, using principles pioneered by the highly successful open-source software movement, help revive the industry?

  2. Getting Open Source Software into Schools: Strategies and Challenges

    ERIC Educational Resources Information Center

    Hepburn, Gary; Buley, Jan

    2006-01-01

    In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…

  3. Open Source as Appropriate Technology for Global Education

    ERIC Educational Resources Information Center

    Carmichael, Patrick; Honour, Leslie

    2002-01-01

    Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing…

  4. Open Source Course Management Systems: A Case Study

    ERIC Educational Resources Information Center

    Remy, Eric

    2005-01-01

    In Fall 2003, Randolph-Macon Woman's College rolled out Claroline, an Open Source course management system, for all the classes on campus. This document will cover some background on both Open Source in general and course management systems in particular, discuss technical challenges in the introduction and integration of the system and give some…

  5. Open Source for Knowledge and Learning Management: Strategies beyond Tools

    ERIC Educational Resources Information Center

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2007-01-01

    In recent years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies.…

  6. Integrating an Automatic Judge into an Open Source LMS

    ERIC Educational Resources Information Center

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

    This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  7. Open-Source Data and the Study of Homicide.

    PubMed

    Parkin, William S; Gruenewald, Jeff

    2015-07-20

    To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to be used as a valid and reliable data source in testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open sources could recreate the population of homicides and variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data. Also, for every variable measured, the open sources captured as much, or more, of the information presented in the official data. Moreover, variables not available in official data, but potentially useful for testing theory, were identified in open sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events.

  8. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on the same framework; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  9. Multi-modal hard x-ray imaging with a laboratory source using selective reflection from a mirror.

    PubMed

    Pelliccia, Daniele; Paganin, David M

    2014-04-01

    Multi-modal hard x-ray imaging sensitive to absorption, refraction, phase and scattering contrast is demonstrated using a simple setup implemented with a laboratory source. The method is based on selective reflection at the edge of a mirror, aligned to partially reflect a pencil x-ray beam after its interaction with a sample. Quantitative scattering contrast from a test sample is experimentally demonstrated using this method. Multi-modal imaging of a house fly (Musca domestica) is shown as proof of principle of the technique for biological samples.

  10. Learning by Doing: How to Develop a Cross-Platform Web App

    ERIC Educational Resources Information Center

    Huynh, Minh; Ghimire, Prashant

    2015-01-01

    As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience in a project that involved the development of a student services web app that can be run on cross-platform mobile devices. The paper first describes the background of the project,…

  11. Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.

    2014-03-01

    Genetically-modified T cells expressing chimeric antigen receptors (CAR) exert an anti-tumor effect by identifying tumor-associated antigen (TAA), independent of the major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine whether cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell numbers with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles - Copper-64; SPION-64Cu). This can now potentially be used for 64Cu-based whole-body PET to detect regions of T cell accumulation with high sensitivity, followed by SPION-based MRI of these regions for high-resolution, anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target CD19+ lymphoma in vitro.

  12. Multi-Modality fiducial marker for validation of registration of medical images with histology

    NASA Astrophysics Data System (ADS)

    Shojaii, Rushin; Martel, Anne L.

    2010-03-01

    A multi-modality fiducial marker is presented in this work, which can be used for validating the correlation of histology images with medical images. This marker can also be used for landmark-based image registration. Seven different fiducial markers, including a catheter, spaghetti, black spaghetti, cuttlefish ink, and liquid iron, were implanted in a mouse specimen and then investigated based on visibility, localization, size, and stability. The black spaghetti and the mixture of cuttlefish ink and flour were shown to be the most suitable markers. Based on the size of the markers, black spaghetti is more suitable for large specimens, and the mixture of cuttlefish ink, flour, and water injected into a catheter is more suitable for small specimens such as mouse tumours. These markers are visible on medical images and also detectable on histology and optical images of the tissue blocks. The main component in these agents which enhances the contrast is iron.

  13. Incidental acquisition of foreign language vocabulary through brief multi-modal exposure.

    PubMed

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2013-01-01

    First language acquisition requires relatively little effort compared to foreign language acquisition and happens more naturally through informal learning. Informal exposure can also benefit foreign language learning, although evidence for this has been limited to speech perception and production. An important question is whether informal exposure to spoken foreign language also leads to vocabulary learning through the creation of form-meaning links. Here we tested the impact of exposure to foreign language words presented with pictures in an incidental learning phase on subsequent explicit foreign language learning. In the explicit learning phase, we asked adults to learn translation equivalents of foreign language words, some of which had appeared in the incidental learning phase. Results revealed rapid learning of the foreign language words in the incidental learning phase showing that informal exposure to multi-modal foreign language leads to foreign language vocabulary acquisition. The creation of form-meaning links during the incidental learning phase is discussed.

  14. Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome

    PubMed Central

    Choudhry, Netan; Rao, Rajesh C.

    2015-01-01

    Background Waardenburg syndrome is characterized by a group of features including telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition, examined with multi-modal ultra-widefield imaging in this report. Material/Methods Report of a single case. Results Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed normal vasculature; however, a thickened choroid was seen on Enhanced-Depth Imaging Spectral-Domain OCT (EDI SD-OCT). Conclusion(s) Choroidal hypopigmentation is a unique feature of Waardenburg syndrome, which can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition, and its thickness can be measured with EDI SD-OCT. PMID:26114849

  15. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in development of advanced Monte-Carlo based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  16. Automatic trajectory planning of DBS neurosurgery from multi-modal MRI datasets.

    PubMed

    Bériault, Silvain; Al Subaie, Fahd; Mok, Kelvin; Sadikot, Abbas F; Pike, G Bruce

    2011-01-01

    We propose an automated method for preoperative trajectory planning of deep brain stimulation image-guided neurosurgery. Our framework integrates multi-modal MRI analysis (T1w, SWI, TOF-MRA) to determine an optimal trajectory to DBS targets (subthalamic nuclei and globus pallidus interna) while avoiding critical brain structures for prevention of hemorrhages, loss of function and other complications. Results show that our method is well suited to aggregate many surgical constraints and allows the analysis of thousands of trajectories in less than 1/10th of the time for manual planning. Finally, a qualitative evaluation of computed trajectories resulted in the identification of potential new constraints, which are not addressed in the current literature, to better mimic the decision-making of the neurosurgeon during DBS planning.
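
    The planning step the abstract describes — scoring thousands of candidate straight-line trajectories against critical structures — can be sketched as a max-min-distance search. The geometry below is illustrative, and `score_trajectory` is a hypothetical stand-in for just one of the paper's several aggregated constraints:

```python
import numpy as np

def score_trajectory(entry, target, critical_points, n_samples=50):
    """Minimum distance from a straight entry->target path to any critical point.

    Higher is safer; a planner would maximize this over thousands of
    candidate entry points (one of several surgical constraints).
    """
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    path = entry + t * (target - entry)            # sampled points on the line
    d = np.linalg.norm(path[:, None, :] - critical_points[None, :, :], axis=2)
    return float(d.min())

target = np.array([0.0, 0.0, 0.0])                 # e.g. the DBS target
entries = [np.array([50.0, 0.0, 0.0]), np.array([0.0, 50.0, 0.0])]
vessels = np.array([[25.0, 5.0, 0.0]])             # critical point near path 1
best = max(entries, key=lambda e: score_trajectory(e, target, vessels))
print(best)  # the second entry point, which stays far from the vessel
```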

  17. Multi-modal vibration energy harvesting approach based on nonlinear oscillator arrays under magnetic levitation

    NASA Astrophysics Data System (ADS)

    Abed, I.; Kacem, N.; Bouhaddi, N.; Bouazizi, M. L.

    2016-02-01

    We propose a multi-modal vibration energy harvesting approach based on arrays of coupled levitated magnets. The equations of motion, which include the magnetic nonlinearity and the electromagnetic damping, are solved using the harmonic balance method coupled with the asymptotic numerical method. A multi-objective optimization procedure is introduced and performed using a non-dominated sorting genetic algorithm for the case of small magnet arrays in order to select the optimal solutions in terms of performance by bringing the eigenmodes close to each other in frequency and amplitude. Thanks to the nonlinear coupling and the modal interactions, even for only three coupled magnets the proposed method enables harvesting the vibration energy in the operating frequency range of 4.6-14.5 Hz, with a bandwidth of 190% and a normalized power of 20.2 mW cm^-3 g^-2.

  18. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modality molecular imaging cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, due to the limitations inherent in any single molecular imaging technique. To compensate for the deficiencies of single-function magnetic resonance imaging contrast agents, the combination of multi-modality imaging has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their application in biomedicine.

  19. Multi-modal digital holographic microscopy for wide-field fluorescence and 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Quan, Xiangyu; Xia, Peng; Matoba, Osamu; Nitta, Koichi; Awatsuji, Yasuhiro

    2016-03-01

    Multi-modal digital holographic microscopy combines epifluorescence microscopy with digital holographic microscopy, its main function being to obtain fluorescence-intensity images and quantitative phase contrast simultaneously. The proposed system is particularly beneficial to biological studies, which often depend on fluorescent labeling to detect specific intracellular molecules, while the phase information reflects the properties of unstained transparent structures. This paper presents our latest research on applications such as randomly moving fluorescent micro-beads and living cells of Physcomitrella patens. The experiments succeeded in obtaining a succession of wide-field fluorescence images and holograms of micro-beads, with focusing at different depths realized via numerical reconstruction. Living cells of Physcomitrella patens were recorded statically; the reconstruction distance indicates the thickness of the cellular structure. These results point toward practical applications in many areas of biomedical research.
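
    The focusing at different depths mentioned above relies on free-space propagation of the reconstructed complex field. A standard angular-spectrum refocusing sketch in NumPy (a common technique for this step; the parameter values are illustrative, not those of the reported setup):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, distance):
    """Numerically refocus a complex optical field by free-space propagation.

    field: 2D complex array; wavelength, pixel, distance in meters.
    Evanescent frequency components are suppressed.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)                 # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward then backward by the same distance recovers the field.
rng = np.random.default_rng(0)
f0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
f1 = angular_spectrum_propagate(f0, wavelength=0.5e-6, pixel=2e-6, distance=1e-4)
f2 = angular_spectrum_propagate(f1, wavelength=0.5e-6, pixel=2e-6, distance=-1e-4)
```

    In a holographic microscope this propagation is applied to the field recovered from a recorded hologram, sweeping `distance` to bring different sample depths into focus.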

  20. Multi-modal miniaturized microscope: successful merger of optical, MEMS, and electronic technologies

    NASA Astrophysics Data System (ADS)

    Tkaczyk, Tomasz S.; Rogers, Jeremy D.; Rahman, Mohammed; Christenson, Todd C.; Gaalema, Stephen; Dereniak, Eustace L.; Richards-Kortum, Rebecca; Descour, Michael R.

    2005-12-01

    The multi-modal miniature microscope (4M) device for early cancer detection is based on a micro-optical table (MOT) platform which accommodates optical, micro-mechanical, and electronic components on a chip. The MOT is a zero-alignment optical-system concept developed for a wide variety of opto-mechanical instruments. In practical terms this concept translates into assembly errors that are smaller than the tolerances on the performance of the optical system. This paper discusses all major system elements: the optical system, a custom high-speed CMOS detector and a comb-drive actuator. It also points to mutual relations between the different technologies. The hybrid sol-gel lenses, their fabrication and assembly techniques, optical system parameters, and various operation modes are also discussed. A particularly interesting mode is a structured-illumination technique that delivers confocal-imaging capabilities and may be used for optical sectioning. Structured illumination is produced with a LIGA-fabricated actuator scanning in resonance and reconstructed using a sine approximation algorithm.
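
    Structured-illumination sectioning of this kind is commonly demodulated from three grid-illuminated frames taken at phase shifts of 0, 2π/3 and 4π/3; the square-law algorithm of Neil et al. (1997) is shown below as an assumed stand-in for the sine approximation algorithm named in the abstract:

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Sectioned image from three grid-illuminated frames (phases 0, 2pi/3, 4pi/3).

    Square-law demodulation: only in-focus, grid-modulated light survives
    the pairwise difference terms; out-of-focus light cancels.
    """
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)

x = np.linspace(0, 4 * np.pi, 200)
obj = np.ones_like(x)                      # a uniform, fully in-focus plane
frames = [obj * (1 + 0.5 * np.sin(x + p)) for p in (0, 2*np.pi/3, 4*np.pi/3)]
section = optical_section(*frames)
# For three phases 120 degrees apart the grid pattern cancels exactly,
# so the demodulated section of a uniform object is flat.
print(float(section.std()) < 1e-6 * float(section.mean()))  # True
```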

  1. Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Zhang, Minliang; Chen, Qian

    2016-05-01

    We demonstrate a simple and cost-effective programmable aperture microscope that realizes multi-modal computational imaging by integrating a programmable liquid crystal display (LCD) into a conventional wide-field microscope. The LCD selectively modulates the light distribution at the rear aperture of the microscope objective, allowing numerous imaging modalities, such as bright field, dark field, differential phase contrast, quantitative phase imaging, multi-perspective imaging, and full-resolution light field imaging, to be achieved and switched rapidly in the same setup, without requiring specialized hardware or any moving parts. We experimentally demonstrate the success of our method by imaging unstained cheek cells, profiling a microlens array, and changing perspective views of thick biological specimens. The post-exposure refocusing of a butterfly mouthpart and an RFP-labeled dicot stem cross-section is also presented to demonstrate the full-resolution light field imaging capability of our system for both translucent and fluorescent specimens.
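
    Switching between modalities comes down to displaying different binary patterns on the LCD in the pupil plane. The masks below are illustrative approximations of the usual patterns (full disc for bright field, annulus for dark field, complementary half-discs whose difference image gives differential phase contrast), not the paper's exact designs:

```python
import numpy as np

def aperture_masks(n=64):
    """Binary pupil masks one could display on the LCD at the rear aperture."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r = np.hypot(x, y)
    bright_field = r <= 1.0                 # full pupil
    dark_field = (r > 0.7) & (r <= 1.0)     # annulus outside the imaging NA
    dpc_left = bright_field & (x < 0)       # complementary half-apertures
    dpc_right = bright_field & (x >= 0)
    return bright_field, dark_field, dpc_left, dpc_right

bf, df, left, right = aperture_masks()
# The two DPC half-apertures tile the bright-field pupil exactly.
print(np.array_equal(left | right, bf), bool(np.any(left & right)))  # True False
```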

  2. A multi-modal approach for activity classification and fall detection

    NASA Astrophysics Data System (ADS)

    Castillo, José Carlos; Carneiro, Davide; Serrano-Cuerda, Juan; Novais, Paulo; Fernández-Caballero, Antonio; Neves, José

    2014-04-01

    Society is changing towards a new paradigm in which an increasing number of older adults live alone. In parallel, the incidence of conditions that affect mobility and independence is also rising as a consequence of a longer life expectancy. In this paper, the specific problem of falls of older adults is addressed by devising a technological solution for monitoring these users. Video cameras, accelerometers and GPS sensors are combined in a multi-modal approach to monitor humans inside and outside the domestic environment. Machine learning techniques are used to detect falls and classify activities from accelerometer data. Video feeds and GPS are used to provide location inside and outside the domestic environment. The result is a monitoring solution that does not imply the confinement of the users to a closed environment.
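
    The paper uses learned classifiers; as an illustration of the accelerometer side of such a pipeline, a simple threshold rule on the acceleration magnitude (near free fall followed closely by a hard impact) can stand in for the classifier. The thresholds and window length below are assumptions, not values from the paper:

```python
import numpy as np

def detect_fall(accel, g=9.81, impact_thresh=2.5, freefall_thresh=0.5):
    """Flag a fall when near-free-fall is followed shortly by a hard impact.

    accel: (N, 3) accelerometer samples in m/s^2. A threshold heuristic
    standing in for a trained classifier (illustrative only).
    """
    mag = np.linalg.norm(accel, axis=1) / g          # magnitude in units of g
    freefall = np.where(mag < freefall_thresh)[0]
    impact = np.where(mag > impact_thresh)[0]
    return any(0 < j - i <= 20 for i in freefall for j in impact)

# Synthetic trace: rest (1 g), free fall (~0 g), impact spike, rest.
trace = np.full((40, 3), [0.0, 0.0, 9.81])
trace[15:20] = [0.0, 0.0, 0.5]       # free-fall phase
trace[21] = [0.0, 0.0, 34.0]         # impact spike (~3.5 g)
print(detect_fall(trace))  # True
```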

  3. A Distance Measure Comparison to Improve Crowding in Multi-Modal Problems.

    SciTech Connect

    D. Todd Vollmer; Terence Soule; Milos Manic

    2010-08-01

    Solving multi-modal optimization problems is of interest to researchers tackling real-world problems in areas such as control systems and power engineering. Extensions of simple genetic algorithms, particularly types of crowding, have been developed to help solve these types of problems. This paper examines the performance of two distance measures, Mahalanobis and Euclidean, exercised in two different crowding-type implementations against five minimization functions. Within the context of the experiments, empirical evidence shows that the statistics-based Mahalanobis distance measure, when used in Deterministic Crowding, produces results equivalent to a Euclidean measure. In the case of Restricted Tournament Selection, use of Mahalanobis found on average 40% more of the global optimum, maintained a 35% higher peak count and produced an average final best fitness value three times better.
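
    The two distance measures compared in the paper differ only in whether the population covariance is taken into account. A minimal NumPy comparison on toy data (with an identity covariance the Mahalanobis distance reduces to the Euclidean one):

```python
import numpy as np

def euclidean(x, y):
    return float(np.linalg.norm(x - y))

def mahalanobis(x, y, cov):
    """Distance that accounts for correlation and scale via the covariance."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

x = np.array([2.0, 0.0])
y = np.array([0.0, 0.0])
identity = np.eye(2)
stretched = np.diag([4.0, 1.0])        # first axis has 4x the variance

print(euclidean(x, y))                  # 2.0
print(mahalanobis(x, y, identity))      # 2.0 -- reduces to Euclidean
print(mahalanobis(x, y, stretched))     # 1.0 -- offset is small vs. the spread
```

    In a crowding scheme, the chosen measure decides which existing individual a new offspring competes against, so a covariance-aware measure can change which niches are preserved.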

  4. Control of an axisymmetric turbulent jet by multi-modal excitation

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Rice, Edward J.; Reshotko, Eli

    1991-01-01

    Experimental measurements of naturally occurring instability modes in the axisymmetric shear layer of high Reynolds number turbulent jet are presented. The region up to the end of the potential core was dominated by the axisymmetric mode. The azimuthal modes dominated only downstream of the potential core region. The energy content of the higher order modes (m is greater than 1) was significantly lower than that of the axisymmetric and m = + or - 1 modes. Under optimum conditions, two-frequency excitation (both at m = 0) was more effective than single frequency excitation (at m = 0) for jet spreading enhancement. An extended region of the jet was controlled by forcing combinations of both axisymmetric (m = 0) and helical modes (m = + or - 1). Higher spreading rates were obtained when multi-modal forcing was applied.

  6. Dynamic Graph Analytic Framework (DYGRAF): greater situation awareness through layered multi-modal network analysis

    NASA Astrophysics Data System (ADS)

    Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.

    2012-06-01

    Understanding the structure and dynamics of networks is of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate the interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove from the analyst the burden of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure which facilitates a layered multi-modal network analysis (LMMNA) approach that enables analysts to assemble previously disconnected, yet related, networks in a common battle-space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.
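
    The layered analysis idea — one edge set per modality, assembled into a single picture — can be sketched as a dictionary merge. This is a hypothetical illustration; DYGRAF's actual data model is not described in the abstract:

```python
def merge_layers(*layers):
    """Combine per-modality edge sets into one multi-layer adjacency view.

    Each layer is (name, list of node pairs); the merged graph records,
    for every undirected edge, which layers it appears in, so an analyst
    sees all modalities in one picture.
    """
    merged = {}
    for name, edges in layers:
        for pair in edges:
            key = tuple(sorted(pair))           # undirected edge
            merged.setdefault(key, []).append(name)
    return merged

comms = ("comms", [("alice", "bob")])
finance = ("finance", [("bob", "alice"), ("bob", "carol")])
graph = merge_layers(comms, finance)
print(graph[("alice", "bob")])   # ['comms', 'finance']
print(graph[("bob", "carol")])   # ['finance']
```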

  7. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION.

    PubMed

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in a high layer to generate the final segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.

  8. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    PubMed

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, to perform automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality and to train and test the model. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal-appearing white matter (DTI) and functional connectivity made the most important contributions to the final classification. In a multi-class classification problem, we distinguished between all three groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the three most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
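
    As a toy illustration of the kernel-fusion idea above, the sketch below combines one RBF kernel per modality with fixed weights and classifies by nearest class mean in the induced feature space. The data, the equal kernel weights, and the decision rule are all invented for illustration; the study itself learned the modality weights via multi-kernel learning.

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 60
y = np.repeat([0, 1], n // 2)
# Two synthetic "modalities"; class separation lives in different features
X1 = rng.normal(size=(n, 5)); X1[y == 1, 0] += 1.5
X2 = rng.normal(size=(n, 8)); X2[y == 1, 1] += 1.5

w = (0.5, 0.5)  # fixed kernel weights; true MKL would learn these
train = np.arange(n) % 2 == 0
test = ~train

def combined(A_idx, B_idx):
    """Weighted sum of per-modality kernels between two index sets."""
    return w[0] * rbf(X1[A_idx], X1[B_idx]) + w[1] * rbf(X2[A_idx], X2[B_idx])

K_tt = combined(test, train)  # test-vs-train Gram block
# Kernel nearest-class-mean: larger mean similarity to a class wins
pred = np.array([
    int(K_tt[i, y[train] == 1].mean() > K_tt[i, y[train] == 0].mean())
    for i in range(test.sum())
])
acc = (pred == y[test]).mean()
print(round(acc, 2))
```

A real pipeline would cross-validate the kernel weights themselves, as the abstract's 10-fold scheme does.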

  9. Fully Convolutional Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    2016-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single T1, T2 or fractional anisotropy (FA) modality, or on their simply-stacked combinations, without fully exploring the multi-modality information. To address this challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for the final segmentation. Specifically, we run a separate convolution-pooling stream for each of the T1, T2, and FA images, and then combine their high-layer features to generate the final segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement. PMID:27668065
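
    A minimal caricature of the late-fusion design described above, with untrained random filters: one small convolutional stream per modality, channel-wise concatenation of the high-layer maps, and a per-pixel linear classifier into three tissue classes. Image sizes, filter counts and weights are illustrative only; a real FCN would learn its weights and add upsampling layers.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernels):
    """'valid' cross-correlation of a 2-D image with a bank of 3x3 kernels."""
    H, W = img.shape
    out = np.empty((len(kernels), H - 2, W - 2))
    for c, k in enumerate(kernels):
        for i in range(H - 2):
            for j in range(W - 2):
                out[c, i, j] = (img[i:i+3, j:j+3] * k).sum()
    return np.maximum(out, 0.0)  # ReLU

# Three modalities of the same 16x16 slice (synthetic stand-ins for T1/T2/FA)
t1, t2, fa = (rng.normal(size=(16, 16)) for _ in range(3))

# One small conv stream per modality (random, untrained weights)
streams = [conv2d(m, rng.normal(size=(4, 3, 3))) for m in (t1, t2, fa)]

# Late fusion: concatenate high-layer feature maps channel-wise, then a
# 1x1 "conv" (per-pixel linear map) to 3 tissue classes (WM/GM/CSF)
feats = np.concatenate(streams, axis=0)               # (12, 14, 14)
w_out = rng.normal(size=(3, feats.shape[0]))          # 1x1 conv weights
logits = np.tensordot(w_out, feats, axes=([1], [0]))  # (3, 14, 14)
seg = logits.argmax(axis=0)                           # per-pixel label map
print(seg.shape)
```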

  10. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
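
    The multi-channel input scheme described above (T1, T2 and FA fed jointly into one network) can be sketched with a single untrained convolution layer whose filters span all three channels; shapes and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Early fusion: stack T1, T2 and FA as input channels of one network,
# in contrast to training a separate stream per modality.
t1, t2, fa = (rng.normal(size=(16, 16)) for _ in range(3))
x = np.stack([t1, t2, fa])            # (3, 16, 16) multi-channel input

k = rng.normal(size=(8, 3, 3, 3))     # 8 filters, each spanning all 3 channels
H = W = 14                            # 'valid' output size for 3x3 kernels
out = np.zeros((8, H, W))
for f in range(8):
    for i in range(H):
        for j in range(W):
            # each output value mixes information from all modalities at once
            out[f, i, j] = (x[:, i:i+3, j:j+3] * k[f]).sum()
out = np.maximum(out, 0)              # ReLU feature maps
print(out.shape)
```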

  11. Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)

    SciTech Connect

    Aiken, R.J.; Carlson, R.A.; Foster, I.T.

    1997-01-01

    The research and education (R&E) community requires persistent and scalable network infrastructure to concurrently support production and research applications as well as network research. In the past, the R&E community has relied on supporting parallel network and end-node infrastructures, which can be very expensive and inefficient for network service managers and application programmers. The grand challenge in networking is to provide support for multiple, concurrent, multi-layer views of the network for the applications and the network researchers, and to satisfy the sometimes conflicting requirements of both while ensuring that one type of traffic does not adversely affect the other. Internet and telecommunications service providers will also benefit from a multi-modal infrastructure, which can provide smoother transitions to new technologies and allow for testing of these technologies with real user traffic while they are still in pre-production mode. The authors' proposed approach requires the use of as much of the same network and end system infrastructure as possible to reduce the costs needed to support both classes of activities (i.e., production and research). Breaking the infrastructure into segments and objects (e.g., routers, switches, multiplexors, circuits, paths, etc.) gives the capability to dynamically construct and configure the virtual active networks to address these requirements. These capabilities must be supported at the campus, regional, and wide-area network levels to allow for collaboration by geographically dispersed groups. The Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet) described in this report is an initial architecture and framework designed to identify and support the capabilities needed for the proposed combined infrastructure and to address related research issues.

  12. NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences.

    PubMed

    Riva, Giuseppe; Gaggioli, Andrea; Villani, Daniela; Preziosa, Alessandra; Morganti, Francesca; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca

    2007-01-01

    In the past decade, the use of virtual reality for clinical and research applications has become more widespread. However, the diffusion of this approach is still limited by three main issues: poor usability, lack of technical expertise among clinical professionals, and high costs. To address these challenges, we introduce NeuroVR (http://www.neurovr.org--http://www.neurotiv.org), a cost-free virtual reality platform based on open-source software that allows non-expert users to adapt the content of a pre-designed virtual environment to meet the specific needs of the clinical or experimental setting. Using the NeuroVR Editor, the user can choose the appropriate psychological stimuli/stressors from a database of objects (both 2D and 3D) and videos, and easily place them into the virtual environment. The edited scene can then be visualized in the NeuroVR Player using either immersive or non-immersive displays. Currently, the NeuroVR library includes different virtual scenes (apartment, office, square, supermarket, park, classroom, etc.), covering two of the most studied clinical applications of VR: specific phobias and eating disorders. The NeuroVR Editor is based on Blender (http://www.blender.org), the open source, cross-platform suite of tools for 3D creation, and is available as a completely free resource. An interesting feature of the NeuroVR Editor is the possibility to add new objects to the database. This feature allows the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene, e.g., by using photos or movies of objects/people that are part of the patient's daily life, thereby improving the efficacy of the exposure. The NeuroVR platform runs on standard personal computers with Microsoft Windows; the only hardware requirement concerns the graphics card, which must support OpenGL.

  13. Open Genetic Code: on open source in the life sciences.

    PubMed

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility in regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  14. Real Space Multigrid (RMG) Open Source Software Suite for Multi-Petaflops Electronic Structure Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry; Li, Yan

    RMG is a cross-platform open source package for ab initio electronic structure calculations that uses real-space grids, multigrid pre-conditioning, and subspace diagonalization to solve the Kohn-Sham equations. The code has been successfully used for a wide range of problems ranging from complex bulk materials to multifunctional electronic devices and biological systems. RMG makes efficient use of GPU accelerators, if present, but does not require them. Recent work has extended GPU support to systems with multiple GPUs per computational node and has optimized both CPU and GPU memory usage to enable large problem sizes, which are no longer limited by the memory of the GPU board. Additional enhancements include increased portability, scalability and performance. New versions of the code are regularly released at sourceforge.net/projects/rmgdft/. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms.

  15. Interactive multicentre teleconferences using open source software in a team of thoracic surgeons.

    PubMed

    Ito, Kazuhiro; Shimada, Junichi; Katoh, Daishiro; Nishimura, Motohiro; Yanada, Masashi; Okada, Satoru; Ishihara, Shunta; Ichise, Kaori

    2012-12-01

    Real-time consultation between a team of thoracic surgeons is important for the management of difficult cases. We established a system for interactive teleconsultation between multiple sites, based on open-source software. The graphical desktop-sharing system VNC (virtual network computing) was used for remotely controlling another computer. An image-processing package (OsiriX) was installed on the server to share the medical images. We set up a voice communication system using Voice Chatter, a free, cross-platform voice communication application. Four hospitals participated in the trials. One was connected by Gigabit Ethernet, one by WiMAX and one by ADSL. Surgeons at three of the sites found that it was comfortable to view images and consult with each other using the teleconferencing system. However, it was not comfortable using the client that connected via WiMAX, because of dropped frames. Apart from the WiMAX connection, the VNC-based screen-sharing system transferred the clinical images efficiently and in real time. We found the screen-sharing software VNC to be a good application for medical image interpretation, especially for a team of thoracic surgeons using multislice CT scans.

  16. Effective Beginning Handwriting Instruction: Multi-Modal, Consistent Format for 2 Years, and Linked to Spelling and Composing

    ERIC Educational Resources Information Center

    Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.

    2017-01-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders,…

  17. Multi-Yield Radio Frequency Countermeasures Investigations and Development (MYRIAD) Task Order 006: Integrated Multi-Modal RF Sensing

    DTIC Science & Technology

    2012-08-01

    Mark L. Brockman (Dynetics, Inc.); Steven Kay and Quan Ding (University of Rhode Island); Sean M. O’Rourke and A. Lee Swindlehurst (University of California, Irvine)

  18. Sex in the Curriculum: The Effect of a Multi-Modal Sexual History-Taking Module on Medical Student Skills

    ERIC Educational Resources Information Center

    Lindau, Stacy Tessler; Goodrich, Katie G.; Leitsch, Sara A.; Cook, Sandy

    2008-01-01

    Purpose: The objective of this study was to determine the effect of a multi-modal curricular intervention designed to teach sexual history-taking skills to medical students. The Association of Professors of Gynecology and Obstetrics, the National Board of Medical Examiners, and others, have identified sexual history-taking as a learning objective…

  19. Hopc: a Novel Similarity Metric Based on Geometric Structural Properties for Multi-Modal Remote Sensing Image Matching

    NASA Astrophysics Data System (ADS)

    Ye, Yuanxin; Shen, Li

    2016-06-01

    Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images by designing a fast template matching scheme. This novel metric aims to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority to recent state-of-the-art similarity metrics (e.g., NCC and MI), and demonstrate its improved matching performance.

  20. Multi-atlas segmentation with joint label fusion and corrective learning-an open source implementation.

    PubMed

    Wang, Hongzhi; Yushkevich, Paul A

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.
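
    The weighted-voting baseline described above can be written in a few lines. All arrays below are synthetic, and the weights are computed independently per atlas, which is exactly the limitation that joint label fusion addresses by modelling correlated atlas errors.

```python
import numpy as np

rng = np.random.default_rng(3)
n_atlas, n_vox, n_lab = 4, 100, 3

# Candidate labels each registered atlas proposes for every voxel, plus a
# spatially varying weight derived from atlas-target intensity similarity
atlas_labels = rng.integers(0, n_lab, size=(n_atlas, n_vox))
weights = rng.random(size=(n_atlas, n_vox))
weights /= weights.sum(axis=0, keepdims=True)   # normalise per voxel

# Weighted voting: accumulate each atlas's weight behind its proposed label
votes = np.zeros((n_lab, n_vox))
for a in range(n_atlas):
    votes[atlas_labels[a], np.arange(n_vox)] += weights[a]
consensus = votes.argmax(axis=0)                # consensus segmentation
print(consensus.shape)
```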

  1. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
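
    The core idea above, aligning flow fields rather than raw intensities, can be caricatured with a brute-force one-dimensional search: the paper's variational optimization is replaced here by an SSD scan over integer horizontal shifts, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic optical-flow magnitude maps from the RGB and IR streams:
# same underlying motion, IR version shifted by a known 3-pixel disparity
base = rng.random((32, 32))
flow_rgb = base
flow_ir = np.roll(base, 3, axis=1) + 0.01 * rng.normal(size=(32, 32))

def ssd_shift(a, b, max_shift=5):
    """Return the integer horizontal shift of b that best aligns it to a."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: ((a - np.roll(b, -s, axis=1)) ** 2).sum())

print(ssd_shift(flow_rgb, flow_ir))  # recovers the 3-pixel disparity
```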

  2. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth, high-latency communication links to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all of the requirements. Software changes were suggested to meet the requirements.

  3. Guidelines for the implementation of an open source information system

    SciTech Connect

    Doak, J.; Howell, J.A.

    1995-08-01

    This work was initially performed for the International Atomic Energy Agency (IAEA) to help with the Open Source Task of the 93 + 2 Initiative; however, the information should be of interest to anyone working with open sources. The authors cover all aspects of an open source information system (OSIS) including, for example, identifying relevant sources, understanding copyright issues, and making information available to analysts. They foresee this document as a reference point that implementors of a system could augment for their particular needs. The primary organization of this document focuses on specific aspects, or components, of an OSIS; they describe each component and often make specific recommendations for its implementation. This document also contains a section discussing the process of collecting open source data and a section containing miscellaneous information. The appendix contains a listing of various providers, producers, and databases that the authors have come across in their research.

  4. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform to control optical experimental apparatus is illustrated. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  5. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform to control optical experimental apparatus is illustrated. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104

  6. Open Source Software Licenses for Livermore National Laboratory

    SciTech Connect

    Busby, L.

    2000-08-10

    This paper attempts to develop supporting material in an effort to provide new options for licensing Laboratory-created software. Where employees and the Lab wish to release software codes as so-called "Open Source", they need, at a minimum, new licensing language for their released products. Several open source software licenses are reviewed to understand their common elements and to develop recommendations regarding new language.

  7. Learning from hackers: open-source clinical trials.

    PubMed

    Dunn, Adam G; Day, Richard O; Mandl, Kenneth D; Coiera, Enrico

    2012-05-02

    Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. A similar gap was addressed in the software industry by their open-source software movement. Here, we examine how the social and technical principles of the movement can guide the growth of an open-source clinical trial community.

  8. Open Source Intelligence "OSINT": Issues for Congress

    DTIC Science & Technology

    2008-01-28

    programs of the Soviet Union and towards the disparate threats posed by emerging post-Cold War threats. Collection strategies shifted from sophisticated... he stated, "Open source intelligence is the outer pieces of the jigsaw puzzle, without which one can neither begin nor complete the puzzle"... Some open source proponents view such information as constituting more than just "the outer pieces of the jigsaw puzzle," but rather every bit

  9. Multi-modality registration via multi-scale textural and spectral embedding representations

    NASA Astrophysics Data System (ADS)

    Li, Lin; Rusu, Mirabela; Viswanath, Satish; Penzias, Gregory; Pahwa, Shivani; Gollamudi, Jay; Madabhushi, Anant

    2016-03-01

    Intensity-based similarity measures assume that the original signal intensity of different modality images can provide statistically consistent information regarding the two modalities to be co-registered. In multi-modal registration problems, however, intensity-based similarity measures are often inadequate to identify an optimal transformation. Texture features can improve the performance of multi-modal co-registration by providing more similar appearance representations of the two images to be co-registered than the signal intensity representations do. Furthermore, texture features extracted at different length scales (neighborhood sizes) can reveal similar underlying structural attributes between the images to be co-registered, similarities that may not be discernible in the signal intensity representation alone. One limitation of using texture features, however, is that a number of them may be redundant or interdependent, and hence there is a need to identify non-redundant representations. Additionally, it is not clear which features at which specific scales reveal similar attributes across the images to be co-registered. To address this problem, we introduced a novel approach for multimodal co-registration that employs new multi-scale image representations. Our approach comprises four distinct steps: (1) texture feature extraction at each length scale within both the target and template images, (2) independent component analysis (ICA) at each texture feature length scale, (3) spectral embedding (SE) of the ICA components (ICs) obtained for the texture features at each length scale, and finally (4) identifying and combining the optimal length scales at which to perform the co-registration. To combine and co-register across different length scales, α-mutual information (α-MI) was applied in the high-dimensional space of spectral embedding vectors to facilitate co-registration. To validate our multi-scale co-registration approach, we aligned 45 pairs of prostate
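
    Step (3) above, spectral embedding of the feature representations, can be sketched with Laplacian eigenmaps on a feature affinity graph. The feature vectors below are synthetic and the ICA step is omitted; this is a generic illustration of spectral embedding, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
# Texture feature vectors for 50 pixels at one length scale (synthetic)
F = rng.normal(size=(50, 12))

# Gaussian affinity between pixel feature vectors
d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
D = np.diag(W.sum(axis=1))
L = D - W                               # unnormalised graph Laplacian

# Eigenvectors of L with smallest non-trivial eigenvalues give a
# low-dimensional embedding that preserves feature-space neighborhoods
evals, evecs = np.linalg.eigh(L)        # ascending eigenvalues
embed = evecs[:, 1:4]                   # 3-D spectral embedding
print(embed.shape)
```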

  10. A molecular receptor targeted, hydroxyapatite nanocrystal based multi-modal contrast agent.

    PubMed

    Ashokan, Anusha; Menon, Deepthy; Nair, Shantikumar; Koyakutty, Manzoor

    2010-03-01

    Multi-modal molecular imaging can significantly improve the potential of non-invasive medical diagnosis by combining basic anatomical descriptions with in-depth phenotypic characteristics of disease. Contrast agents with multifunctional properties that can sense and enhance the signature of specific molecular markers, together with high biocompatibility, are essential for combinatorial molecular imaging approaches. Here, we report a multi-modal contrast agent based on hydroxyapatite nanocrystals (nHAp), which is engineered to show simultaneous contrast enhancement for three major molecular imaging techniques: magnetic resonance imaging (MRI), X-ray imaging and near-infrared (NIR) fluorescence imaging. Monodispersed nHAp crystals of average size approximately 30 nm and hexagonal crystal structure were in situ doped with multiple rare-earth impurities by a surfactant-free, aqueous wet-chemical method at 100 degrees C. Doping of nHAp with Eu(3+) (3 at%) resulted in bright near-infrared fluorescence (700 nm) due to the efficient (5)D(0)-(7)F(4) electronic transition, and co-doping with Gd(3+) resulted in enhanced paramagnetic longitudinal relaxivity (r(1) approximately 12 mM(-1) s(-1)) suitable for T(1)-weighted MR imaging, together with approximately 80% X-ray attenuation suitable for X-ray contrast imaging. The capability of the multifunctional nHAp (MF-nHAp) to specifically target and enhance the signature of molecular receptors (folate) in cancer cells was realized by carbodiimide grafting of the cell-membrane receptor ligand folic acid (FA) on the MF-nHAp surface aminated with the dendrigraft polymer polyethyleneimine (PEI). The FA-PEI-MF-nHAp conjugates showed specific aggregation on FR(+ve) cells while leaving the negative control cells untouched. Nanotoxicity evaluation of this multifunctional nHAp carried out on primary human endothelial cells (HUVEC), normal mouse lung fibroblast cell line (L929), human nasopharyngeal carcinoma (KB) and human lung cancer cell line (A549) revealed no apparent toxicity even

  11. Microarray Meta-Analysis and Cross-Platform Normalization: Integrative Genomics for Robust Biomarker Discovery

    PubMed Central

    Walsh, Christopher J.; Hu, Pingzhao; Batt, Jane; Dos Santos, Claudia C.

    2015-01-01

    The diagnostic and prognostic potential of the vast quantity of publicly available microarray data has driven the development of methods for integrating the data from different microarray platforms. Cross-platform integration, when appropriately implemented, has been shown to improve the reproducibility and robustness of gene signature biomarkers. Microarray platform integration can be conceptually divided into approaches that perform early stage integration (cross-platform normalization) versus late stage data integration (meta-analysis). A growing number of statistical methods and associated software for platform integration are available to the user; however, an understanding of their comparative performance and potential pitfalls is critical for best implementation. In this review we provide evidence-based, practical guidance to researchers performing cross-platform integration, particularly with an objective to discover biomarkers. PMID:27600230
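
    As one concrete example of the early-stage (cross-platform normalization) route mentioned above, the sketch below applies textbook quantile normalization to two synthetic "platforms" measured on different scales. It is a generic illustration, not an implementation from any package the review discusses.

```python
import numpy as np

rng = np.random.default_rng(5)
# Expression matrices (genes x samples) from two hypothetical platforms
# on very different intensity scales
a = rng.lognormal(mean=2.0, size=(200, 6))
b = rng.lognormal(mean=5.0, size=(200, 4))

def quantile_normalize(X):
    """Force every column (sample) onto the same empirical distribution."""
    ranks = X.argsort(axis=0).argsort(axis=0)     # per-column ranks
    mean_sorted = np.sort(X, axis=0).mean(axis=1) # reference distribution
    return mean_sorted[ranks]

merged = quantile_normalize(np.hstack([a, b]))
# After normalization every sample shares one distribution, so the
# platform-level scale difference is gone
print(merged.shape)
```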

  12. Determining Pain Detection and Tolerance Thresholds Using an Integrated, Multi-Modal Pain Task Battery

    PubMed Central

    Hay, Justin L.; Okkerse, Pieter; van Amerongen, Guido; Groeneveld, Geert Jan

    2016-01-01

    Human pain models are useful in assessing the analgesic effect of drugs, providing information about a drug's pharmacology and identifying potentially suitable therapeutic populations. The need to use a comprehensive battery of pain models is highlighted by studies in which only a single pain model, thought to relate to the clinical situation, demonstrates a lack of efficacy. No single experimental model can mimic the complex nature of clinical pain. The integrated, multi-modal pain task battery presented here encompasses the electrical stimulation task, the pressure stimulation task, the cold pressor task, the UVB inflammatory model (which includes a thermal task) and a paradigm for inhibitory conditioned pain modulation. These human pain models have been tested for predictive validity and reliability, both in their own right and in combination, and can be used repeatedly, quickly, in short succession, with minimal burden for the subject and with a modest quantity of equipment. This allows a drug to be fully characterized and profiled for analgesic effect, which is especially useful for drugs with a novel or untested mechanism of action. PMID:27166581

  13. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of the breast under compression. For the first theme, I describe the construction, testing, and initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  14. A multi-modal treatment approach for the shoulder: A 4 patient case series

    PubMed Central

    Pribicevic, Mario; Pollard, Henry

    2005-01-01

    Background This paper describes the clinical management of four cases of shoulder impingement syndrome using a conservative multimodal treatment approach. Clinical Features Four patients presented to a chiropractic clinic with chronic shoulder pain, tenderness in the shoulder region and a limited range of motion with pain and catching. After physical and orthopaedic examination a clinical diagnosis of shoulder impingement syndrome was reached. The four patients were admitted to a multi-modal treatment protocol including soft tissue therapy (ischaemic pressure and cross-friction massage), 7 minutes of phonophoresis (driving of medication into tissue with ultrasound) with 1% cortisone cream, diversified spinal and peripheral joint manipulation and rotator cuff and shoulder girdle muscle exercises. The outcome measures for the study were subjective/objective visual analogue pain scales (VAS), range of motion (goniometer) and return to normal daily, work and sporting activities. All four subjects at the end of the treatment protocol were symptom free with all outcome measures being normal. At 1 month follow up all patients continued to be symptom free with full range of motion and complete return to normal daily activities. Conclusion This case series demonstrates the potential benefit of a multimodal chiropractic protocol in resolving symptoms associated with a suspected clinical diagnosis of shoulder impingement syndrome. PMID:16168053

  15. Multi-structure segmentation of multi-modal brain images using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Eun Young; Johnson, Hans

    2010-03-01

    A method for simultaneous segmentation of multiple anatomical brain structures from multi-modal MR images has been developed. An artificial neural network (ANN) was trained on a set of feature vectors created by a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a training set of 16 expert-traced data sets. The feature vectors were adapted to increase the performance of ANN segmentation: 1) a modified spatial location exploiting the structural symmetry of the human brain, 2) neighbors along the priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation purposes. Several reliability metrics, including relative overlap, similarity index, and intraclass correlation of the ANN-generated segmentations against the manual traces, are similar to or higher than those of previously developed methods. The ANN provides a level of consistency between subjects, and a time efficiency compared with human labor, that allows it to be used for very large studies.

  16. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI.

    PubMed

    Zhuang, Xiahai; Shen, Juan

    2016-07-01

    A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from scale-space theory, uses the information of multi-scale images and provides different levels of the structural information of images for multi-level local atlas ranking. Both the local and global atlas ranking steps use information-theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data comprising 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, a Jaccard index of 0.818 ± 0.0549, and a surface distance error of 1.09 ± 1.11 mm for the 20 MRI data sets. The average runtime for the proposed label fusion was 12.58 min.
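    The multi-scale patch idea can be sketched as a weighted vote: each atlas proposes a label for a voxel, and its vote is weighted by the similarity of its patches to the target's patches at every scale. This is a simplified illustration of MSP-style label fusion, not the authors' exact algorithm; the `patch_similarity` function and the dictionary layout are assumptions.

```python
import numpy as np

def patch_similarity(p, q):
    """Similarity between two image patches (inverse of mean squared difference)."""
    return 1.0 / (1.0 + np.mean((p - q) ** 2))

def multiscale_fusion(target_patches, atlas_patches, atlas_labels):
    """target_patches: {scale: patch} around the voxel being labelled.
    atlas_patches:  one {scale: patch} dict per atlas.
    atlas_labels:   the label each atlas proposes for this voxel.
    Each atlas votes with the product of its per-scale similarities."""
    votes = {}
    for patches, label in zip(atlas_patches, atlas_labels):
        w = 1.0
        for scale, tp in target_patches.items():
            w *= patch_similarity(tp, patches[scale])
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# toy example: atlas_a matches the target at both scales, atlas_b does not
target = {1: np.zeros((3, 3)), 2: np.zeros((5, 5))}
atlas_a = {1: np.zeros((3, 3)), 2: np.zeros((5, 5))}
atlas_b = {1: np.ones((3, 3)), 2: np.ones((5, 5))}
label = multiscale_fusion(target, [atlas_a, atlas_b], atlas_labels=[1, 2])
```

    The similar atlas dominates the vote, so the fused label follows atlas_a.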

  17. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    NASA Astrophysics Data System (ADS)

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-01

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency (JAEA). This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code; thus, the application of PHITS makes it possible to evaluate doses not only for BNCT but also for several other radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT case conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multi-modal Monte-Carlo radiotherapy planning system are presented.

  18. Multi-focus and multi-modal fusion: a study of multi-resolution transforms

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Thomas, Millicent

    2016-05-01

    Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective for each sub-domain, identifying one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
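    To make the coefficient-fusion idea concrete, the sketch below implements a one-level 2D Haar transform and fuses two images by averaging the approximation band and keeping the larger-magnitude detail coefficients, one standard rule in such comparisons. The transform choice and fusion rule are illustrative, not the paper's recommended configuration.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation plus 3 detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Inverse of haar2d (exact reconstruction)."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Average the approximation bands, keep max-magnitude detail coefficients."""
    (a1, h1, v1, d1), (a2, h2, v2, d2) = haar2d(img1), haar2d(img2)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2d((a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2))

x = np.arange(16.0).reshape(4, 4)
roundtrip = ihaar2d(*haar2d(x))
fused = fuse(x, x)
```

    Perfect reconstruction of the analysis/synthesis pair is what makes the per-band fusion rule meaningful: fusing an image with itself returns the image unchanged.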

  19. Eigenanatomy: Sparse Dimensionality Reduction for Multi-Modal Medical Image Analysis

    PubMed Central

    Kandel, Benjamin M.; Wang, Danny JJ; Gee, James C.; Avants, Brian B.

    2014-01-01

    Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA- and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code are publicly available. PMID:25448483
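    The collinearity problem described above can be handled with a simple greedy filter: keep an eigenvector's projection only if it is sufficiently decorrelated from every projection kept so far. This is a minimal sketch of the selection idea, not the clustering procedure used in the paper; the `max_corr` threshold is an assumed parameter.

```python
import numpy as np

def select_decorrelated(projections, max_corr=0.7):
    """Greedily keep projection columns whose absolute correlation with
    every already-kept column stays below max_corr."""
    kept = []
    for j in range(projections.shape[1]):
        ok = all(
            abs(np.corrcoef(projections[:, j], projections[:, k])[0, 1]) < max_corr
            for k in kept
        )
        if ok:
            kept.append(j)
    return kept

# column 1 duplicates column 0 (perfectly collinear); column 2 is weakly
# correlated with both, so it survives the filter
P = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, -1.0],
              [3.0, 6.0, 1.0],
              [4.0, 8.0, -1.0]])
kept = select_decorrelated(P)
```

    The redundant column is dropped while the informative ones are retained, which is the property needed before feeding the projections into a regression.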

  20. Nano-sensitizers for multi-modality optical diagnostic imaging and therapy of cancer

    NASA Astrophysics Data System (ADS)

    Olivo, Malini; Lucky, Sasidharan S.; Bhuvaneswari, Ramaswamy; Dendukuri, Nagamani

    2011-07-01

    We report novel bioconjugated nanosensitizers as optical and therapeutic probes for the detection, monitoring and treatment of cancer. These nanosensitizers, consisting of hypericin-loaded bioconjugated gold nanoparticles, can act as tumor-cell-specific therapeutic photosensitizers for photodynamic therapy, coupled with additional photothermal effects rendered by the plasmonic heating of gold nanoparticles. In addition to the therapeutic effects, the nanosensitizers can be developed as optical probes for state-of-the-art multi-modality in-vivo optical imaging technologies such as in-vivo 3D confocal fluorescence endomicroscopic imaging, optical coherence tomography (OCT) with improved optical contrast using nano-gold, and Surface Enhanced Raman Scattering (SERS) based imaging and bio-sensing. These techniques can be used in tandem or independently as in-vivo optical biopsy techniques to specifically detect and monitor specific cancer cells in-vivo. Such a novel nanosensitizer-based optical biopsy imaging technique has the potential to provide an alternative to tissue biopsy and will enable clinicians to make real-time diagnoses, determine surgical margins during operative procedures and perform targeted treatment of cancers.

  1. Hybrid parameter identification of a multi-modal underwater soft robot.

    PubMed

    Giorgio-Serchi, F; Arienti, A; Corucci, F; Giorelli, M; Laschi, C

    2017-02-28

    We introduce an octopus-inspired, underwater, soft-bodied robot capable of performing waterborne pulsed-jet propulsion and benthic legged locomotion. This vehicle consists of rubber-like materials for as much as 80% of its volume, so that structural flexibility is exploited as a key element during both modes of locomotion. The high bodily softness, the unconventional morphology and the non-stationary nature of its propulsion mechanisms require that the dynamic characterization of this robot be dealt with by ad hoc techniques. We perform parameter identification by resorting to a hybrid optimization approach in which the characterization of the robot's dual ambulatory strategies is performed in a segregated fashion. A least-squares-based method and a genetic-algorithm-based method are employed for the swimming and crawling phases, respectively. The outcomes provide evidence that compartmentalized parameter identification represents a viable protocol for multi-modal vehicle characterization. However, the use of static thrust recordings as the input signal in the dynamic determination of shape-changing self-propelled vehicles is responsible for a critical underestimation of the quadratic drag coefficient.

  2. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near-infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  3. Multi-modal Patient Cohort Identification from EEG Report and Signal Data

    PubMed Central

    Goodwin, Travis R.; Harabagiu, Sanda M.

    2016-01-01

    Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce only moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining a MAP score of 69.87% and an NDCG of 83.21%. PMID:28269938

  4. Anticipation by multi-modal association through an artificial mental imagery process

    NASA Astrophysics Data System (ADS)

    Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno

    2015-01-01

    Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and performed two experiments. In the first experiment we test our model on one individual navigating two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, thus opening interesting pathways to the construction of upper-level grounded cognitive abilities.

  5. Fusion of mass spectrometry and microscopy: a multi-modality paradigm for molecular tissue mapping

    PubMed Central

    Van de Plas, Raf; Yang, Junhai; Spraggins, Jeffrey; Caprioli, Richard M.

    2015-01-01

    A new predictive imaging modality is created through the ‘fusion’ of two distinct technologies: imaging mass spectrometry (IMS) and microscopy. IMS-generated molecular maps, rich in chemical information but having coarse spatial resolution, are combined with optical microscopy maps, which have relatively low chemical specificity but high spatial information. The resulting images combine the advantages of both technologies, enabling prediction of a molecular distribution both at high spatial resolution and with high chemical specificity. Multivariate regression is used to model variables in one technology, using variables from the other technology. Several applications demonstrate the remarkable potential of image fusion: (i) ‘sharpening’ of IMS images, which uses microscopy measurements to predict ion distributions at a spatial resolution that exceeds that of measured ion images by ten times or more; (ii) prediction of ion distributions in tissue areas that were not measured by IMS; and (iii) enrichment of biological signals and attenuation of instrumental artifacts, revealing insights that are not easily extracted from either microscopy or IMS separately. Image fusion enables a new multi-modality paradigm for tissue exploration whereby mining relationships between different imaging sensors yields novel imaging modalities that combine and surpass what can be gleaned from the individual technologies alone. PMID:25707028
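    The multivariate regression step can be sketched with ordinary least squares: fit ion intensities as a function of co-registered microscopy channels, then predict the ion distribution wherever microscopy data exist. The data below are synthetic, and the plain linear model is a simplification of the fusion framework, used only to show the cross-modality prediction pattern.

```python
import numpy as np

# toy data: each row is a pixel; microscopy gives 3 channels, IMS gives 1 ion
rng = np.random.default_rng(0)
microscopy = rng.random((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
ion = microscopy @ true_w  # pretend the ion image is a linear mix of channels

# fit the cross-modality regression on pixels where both modalities exist ...
w, *_ = np.linalg.lstsq(microscopy, ion, rcond=None)

# ... then predict the ion distribution everywhere microscopy was measured
predicted = microscopy @ w
```

    In the real fusion setting the regression is trained at the coarse IMS resolution and evaluated at the fine microscopy resolution, which is what yields the "sharpened" ion images.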

  6. Performance processes within affect-related performance zones: a multi-modal investigation of golf performance.

    PubMed

    van der Lei, Harry; Tenenbaum, Gershon

    2012-12-01

    The individual affect-related performance zones (IAPZs) method, which applies the probabilistic model of Kamata et al. (J Sport Exerc Psychol 24:189-208, 2002) for determining the individual zone of optimal functioning, was used to capture idiosyncratic affective patterns during golf performance. To do so, three male golfers of a varsity golf team were observed during three rounds of golf competition. The investigation implemented a multi-modal assessment approach in which the probabilistic relationships between affective states and both performance process and performance outcome measures were determined. More specifically, introspective (i.e., verbal reports) and objective (heart rate and respiration rate) measures of arousal were incorporated to examine the relationships between arousal states and both process components (i.e., routine consistency, timing) and outcome scores related to golf performance. Results revealed distinguishable and idiosyncratic IAPZs associated with physiological and introspective measures for each golfer. The associations between the IAPZs and decision-making or swing/stroke execution were strong and unique for each golfer. Results are elaborated using cognitive and affect-related concepts, and applications for practitioners are provided.

  7. Stability-Weighted Matrix Completion of Incomplete Multi-modal Data for Disease Diagnosis

    PubMed Central

    Thung, Kim-Han; Adeli, Ehsan; Yap, Pew-Thian

    2016-01-01

    Effective utilization of heterogeneous multi-modal data for Alzheimer's Disease (AD) diagnosis and prognosis has always been hampered by incomplete data. One method to deal with this is low-rank matrix completion (LRMC), which simultaneously imputes missing data features and target values of interest. Although LRMC yields reasonable results, it implicitly weights features from all the modalities equally, ignoring the differences in discriminative power of features from different modalities. In this paper, we propose stability-weighted LRMC (swLRMC), an improvement of LRMC that weights features and modalities according to their importance and reliability. We introduce a method, called stability weighting, that utilizes subsampling techniques and outcomes from a range of hyper-parameters of sparse feature learning to obtain a stable set of weights. Incorporating these weights into LRMC, swLRMC can better account for differences in features and modalities, improving diagnosis. Experimental results confirm that the proposed method outperforms conventional LRMC, feature-selection-based LRMC, and other state-of-the-art methods. PMID:28286884
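    A plain (unweighted) LRMC baseline can be sketched with iterative singular-value soft-thresholding: keep the observed entries fixed and repeatedly re-impute the missing ones from a shrunken low-rank estimate. This illustrates only the LRMC building block; the stability-weighting scheme of swLRMC is not implemented here, and `tau` is an assumed shrinkage parameter.

```python
import numpy as np

def lrmc(M, mask, tau=0.5, iters=200):
    """Low-rank matrix completion: soft-threshold the singular values of the
    current estimate, restore the observed entries, and iterate."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        X = np.where(mask, M, low_rank)  # impute only the missing entries
    return X

# rank-1 ground truth with one entry hidden
M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0])
mask = np.ones_like(M, dtype=bool)
mask[0, 0] = False
completed = lrmc(M, mask)
```

    Because the hidden matrix is rank 1, the low-rank prior recovers the missing entry almost exactly (soft-thresholding introduces a small shrinkage bias).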

  8. MINC 2.0: A Flexible Format for Multi-Modal Images

    PubMed Central

    Vincent, Robert D.; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L.; Fonov, Vladimir S.; Robbins, Steven M.; Baghdadi, Leila; Lerch, Jason; Sled, John G.; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P.; Collins, D. Louis; Evans, Alan C.

    2016-01-01

    It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities. PMID:27563289

  9. Holographic Raman tweezers controlled by multi-modal natural user interface

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltán; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Valušová, Eva; Antalík, Marián; Zemánek, Pavel

    2016-01-01

    Holographic optical tweezers provide a contactless way to trap and manipulate several microobjects independently in space using focused laser beams. Although the methods of fast and efficient generation of optical traps are well developed, their user friendly control still lags behind. Even though several attempts have appeared recently to exploit touch tablets, 2D cameras, or Kinect game consoles, they have not yet reached the level of natural human interface. Here we demonstrate a multi-modal ‘natural user interface’ approach that combines finger and gaze tracking with gesture and speech recognition. This allows us to select objects with an operator’s gaze and voice, to trap the objects and control their positions via tracking of finger movement in space and to run semi-automatic procedures such as acquisition of Raman spectra from preselected objects. This approach takes advantage of the power of human processing of images together with smooth control of human fingertips and downscales these skills to control remotely the motion of microobjects at microscale in a natural way for the human operator.

  10. The integration of quantitative multi-modality imaging data into mathematical models of tumors

    NASA Astrophysics Data System (ADS)

    Atuegwu, Nkiruka C.; Gore, John C.; Yankeelov, Thomas E.

    2010-05-01

    Quantitative imaging data obtained from multiple modalities may be integrated into mathematical models of tumor growth and treatment response to achieve additional insights of practical predictive value. We show how this approach can describe the development of tumors that appear realistic in terms of producing proliferating tumor rims and necrotic cores. Two established models (the logistic model with and without the effects of treatment) and one novel model built a priori from available imaging data have been studied. We modify the logistic model to predict the spatial expansion of a tumor driven by tumor cell migration after a voxel's carrying capacity has been reached. Depending on the efficacy of a simulated cytotoxic treatment, we show that the tumor may either continue to expand or contract. The novel model includes hypoxia as a driver of tumor cell movement. The starting conditions for these models are based on imaging data related to tumor cell number (as estimated from diffusion-weighted MRI), apoptosis (from 99mTc-Annexin-V SPECT), and cell proliferation and hypoxia (from PET). We conclude that integrating multi-modality imaging data into mathematical models of tumor growth is a promising combination that can capture the salient features of tumor growth and treatment response, and this indicates the direction for additional research.
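    The behaviour described, growth toward a carrying capacity that can tip into contraction under sufficiently effective treatment, follows from the treated logistic model. The forward-Euler sketch below uses illustrative parameter values, not those fitted from the imaging data in the study.

```python
import numpy as np

def simulate(n0, k, growth, kill, steps, dt=0.1):
    """Logistic tumor growth with a simple cytotoxic treatment term:
    dN/dt = growth * N * (1 - N/k) - kill * N."""
    n = n0
    history = [n]
    for _ in range(steps):
        n += dt * (growth * n * (1 - n / k) - kill * n)
        history.append(n)
    return np.array(history)

# same tumor, without and with an effective treatment (kill > growth)
untreated = simulate(n0=10.0, k=1000.0, growth=0.5, kill=0.0, steps=300)
treated = simulate(n0=10.0, k=1000.0, growth=0.5, kill=0.8, steps=300)
```

    With no treatment the cell count saturates at the carrying capacity; with a kill rate exceeding the growth rate the tumor contracts toward zero, mirroring the expand-or-contract dichotomy noted in the abstract.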

  12. Multi-modal target detection for autonomous wide area search and surveillance

    NASA Astrophysics Data System (ADS)

    Breckon, Toby P.; Gaszczak, Anna; Han, Jiwan; Eichner, Marcin L.; Barnes, Stuart E.

    2013-10-01

    Generalised wide-area search and surveillance is a commonplace task for multi-sensor-equipped autonomous systems. Here we present a key supporting topic for this task - the automatic interpretation, fusion and reporting of detected targets from multi-modal sensor information received from multiple autonomous platforms deployed for wide-area environment search. We detail the realization of a real-time methodology for the automated detection of people and vehicles using combined visible-band (EO), thermal-band (IR) and radar sensing from a deployed network of multiple autonomous platforms (ground and aerial). This facilitates real-time target detection, reported with varying levels of confidence, using information from both multiple sensors and multiple sensor platforms to provide environment-wide situational awareness. A range of automatic classification approaches are proposed, driven by underlying machine learning techniques, that facilitate the automatic detection of either target type with cross-modal target confirmation. Extended results are presented that show the detection of both people and vehicles under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance evaluation is presented at an episodic level, with individual classifiers optimized for maximal detection of each object of interest (vehicle/person) over a given search path/pattern of the environment, across all sensors and modalities, rather than on a per-sensor-sample basis. Episodic target detection, evaluated over a number of wide-area environment search and reporting tasks, generally exceeds 90% for the targets considered here.

  13. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos, building on signal processing advancements for excitement assessment in the commentators' speech and using audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
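    The excitability idea, scoring segments by how rarely their joint features occur, can be sketched with a histogram density estimate: segments whose feature vectors fall in low-density regions of the joint feature space receive high scores. This is a simplified stand-in for the paper's information-theoretic measure; the binning and toy features are illustrative.

```python
import numpy as np

def excitability_scores(features, bins=5):
    """Score each segment by the rarity (negative log density) of its feature
    vector under a joint histogram density estimate."""
    hist, edges = np.histogramdd(features, bins=bins)
    density = hist / hist.sum()
    scores = []
    for row in features:
        idx = tuple(
            min(np.searchsorted(e, x, side="right") - 1, bins - 1)
            for e, x in zip(edges, row)
        )
        scores.append(-np.log(density[idx] + 1e-12))
    return np.array(scores)

# 20 ordinary segments clustered together, plus one rare outlier segment
features = np.array([[0.1 * i, 0.1 * i] for i in range(20)] + [[10.0, 10.0]])
scores = excitability_scores(features)
```

    Ranking segments by these scores and keeping the top few is the highlight-selection step; the outlier segment receives the highest score here.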

  14. Multi-Modal Neuroimaging Feature Learning for Multi-Class Diagnosis of Alzheimer’s Disease

    PubMed Central

    Liu, Siqi; Liu, Sidong; Cai, Weidong; Che, Hangyu; Pujol, Sonia; Kikinis, Ron; Feng, Dagan; Fulham, Michael J.

    2015-01-01

    The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease-modifying agents become available early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis (CAD) of AD, a bottleneck in diagnostic performance was shown in previous methods, due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with a deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multi-modal neuroimaging features in one setting and has the potential to require less labelled data. A performance gain was achieved in both binary classification and multi-class classification of AD. The advantages and limitations of the proposed framework are discussed. PMID:25423647
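A hedged illustration of the zero-masking idea: during training, one modality's feature block is zeroed at random so the network must reconstruct it from the other modality. The shapes, modality split and masking probability below are invented, and only the masking step is shown, not the deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_mask(batch, n_mri, p=0.5):
    """batch: (n, n_mri + n_pet) rows of concatenated modality features.
    Per sample, zero the MRI block with probability p, otherwise zero
    the PET block, forcing cross-modal reconstruction during training."""
    masked = batch.copy()
    for i in range(batch.shape[0]):
        if rng.random() < p:
            masked[i, :n_mri] = 0.0
        else:
            masked[i, n_mri:] = 0.0
    return masked
```

Feeding `masked` as input while using the unmasked `batch` as the reconstruction target is one way such a fusion network can learn complementary cross-modal information.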

  15. A Framework for an Open Source Geospatial Certification Model

    NASA Astrophysics Data System (ADS)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to grow enormously in the forthcoming years, with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse body-of-knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and evaluated with 105

  16. Technology collaboration by means of an open source government

    NASA Astrophysics Data System (ADS)

    Berardi, Steven M.

    2009-05-01

    The idea of open source software originally began in the early 1980s, but it never gained widespread support until recently, largely due to the explosive growth of the Internet. Only the Internet has made this kind of concept possible, bringing together millions of software developers from around the world to pool their knowledge. The tremendous success of open source software has prompted many corporations to adopt the culture of open source and thus share information they previously held secret. The government, and specifically the Department of Defense (DoD), could also benefit from adopting an open source culture. In acquiring satellite systems, the DoD often builds walls between program offices, but installing doors between programs can promote collaboration and information sharing. This paper addresses the challenges and consequences of adopting an open source culture to facilitate technology collaboration for DoD space acquisitions. DISCLAIMER: The views presented here are the views of the author, and do not represent the views of the United States Government, United States Air Force, or the Missile Defense Agency.

  17. The Imagery Exchange (TIE): Open Source Imagery Management System

    NASA Astrophysics Data System (ADS)

    Alarcon, C.; Huang, T.; Thompson, C. K.; Roberts, J. T.; Hall, J. R.; Cechini, M.; Schmaltz, J. E.; McGann, J. M.; Boller, R. A.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    NASA's Global Imagery Browse Service (GIBS) is the Earth Observation System (EOS) imagery solution for delivering global, full-resolution satellite imagery in a highly responsive manner. GIBS consists of two major subsystems, OnEarth and The Imagery Exchange (TIE). TIE is the horizontally scaled imagery workflow manager component of GIBS, an Open Archival Information System (OAIS) responsible for orchestrating the acquisition, preparation, generation, and archiving of imagery to be served by OnEarth. TIE is an extension of the Data Management and Archive System (DMAS), a high-performance data management system developed at the Jet Propulsion Laboratory by leveraging open source tools and frameworks, including Groovy/Grails, Restlet, Apache ZooKeeper, Apache Solr, and other open source solutions. This presentation focuses on the application of Open Source technologies in developing a horizontally scaled data system like DMAS and TIE. As part of our commitment to contributing back to the open source community, TIE is in the process of being open sourced. This presentation will also cover our current effort to put TIE into the hands of the community from which we have benefited.

  18. Your Personal Analysis Toolkit - An Open Source Solution

    NASA Astrophysics Data System (ADS)

    Mitchell, T.

    2009-12-01

    Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. As geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!

  19. Comparison of open-source linear programming solvers.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.

  20. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    NASA Astrophysics Data System (ADS)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is the ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using the data extracted from a traffic simulator. A realistic test-case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle big quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potentials of the approach.
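As a minimal sketch of the modelling idea (the trip counts below are invented; the paper builds its matrices from simulator output and London data), a row-stochastic transition matrix and its stationary distribution can be computed as:

```python
import numpy as np

# Invented trip counts between three nodes (stops) of a toy network;
# entry [i, j] counts observed trips from node i to node j.
trip_counts = np.array([[0, 40, 10],
                        [30, 0, 20],
                        [20, 30, 0]], dtype=float)

# Row-stochastic transition matrix of the Markov chain.
P = trip_counts / trip_counts.sum(axis=1, keepdims=True)

# Stationary distribution: long-run share of travellers at each node,
# found here by power iteration.
pi = np.ones(3) / 3
for _ in range(200):
    pi = pi @ P
print(pi.round(3))
```

Quantities like the stationary distribution are what make such a model useful for macroscopic control, e.g. spotting nodes that will accumulate disproportionate load.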

  1. Trends and challenges in open source software (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Aylward, Stephen

    2013-10-01

    Over the past decade, the field of medical image analysis research has undergone a rapid evolution. It was a collection of disconnected efforts that were burdened by mundane coding and file I/O tasks. It is now a collaborative community that has embraced open-source software as a shared foundation, reducing mundane coding and I/O burdens, promoting replicable research, and accelerating the pace of research and product development. This talk will review the history and current state of open-source software in medical image analysis research, will discuss the role of intellectual property in research, and will present emerging trends and technologies relevant to the growing importance of open-source software.

  2. A Framework for the Systematic Collection of Open Source Intelligence

    SciTech Connect

    Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  3. A connectivity-based test-retest dataset of multi-modal magnetic resonance imaging in young healthy adults.

    PubMed

    Lin, Qixiang; Dai, Zhengjia; Xia, Mingrui; Han, Zaizhu; Huang, Ruiwang; Gong, Gaolang; Liu, Chao; Bi, Yanchao; He, Yong

    2015-01-01

    Recently, magnetic resonance imaging (MRI) has been widely used to investigate the structures and functions of the human brain in health and disease in vivo. However, there are growing concerns about the test-retest reliability of structural and functional measurements derived from MRI data. Here, we present a test-retest dataset of multi-modal MRI including structural MRI (S-MRI), diffusion MRI (D-MRI) and resting-state functional MRI (R-fMRI). Fifty-seven healthy young adults (age range: 19-30 years) were recruited and completed two multi-modal MRI scan sessions at an interval of approximately 6 weeks. Each scan session included R-fMRI, S-MRI and D-MRI data. Additionally, there were two separate R-fMRI scans at the beginning and at the end of the first session (approximately 20 min apart). This multi-modal MRI dataset not only provides excellent opportunities to investigate the short- and long-term test-retest reliability of the brain's structural and functional measurements at the regional, connectional and network levels, but also allows probing the test-retest reliability of structural-functional couplings in the human brain.

  4. Female preference for multi-modal courtship: multiple signals are important for male mating success in peacock spiders

    PubMed Central

    Girard, Madeline B.; Elias, Damian O.; Kasumovic, Michael M.

    2015-01-01

    A long-standing goal for biologists has been to understand how female preferences operate in systems where males have evolved numerous sexually selected traits. Jumping spiders of the Maratus genus are exceptionally sexually dimorphic in appearance and signalling behaviour. Presumably, strong sexual selection by females has played an important role in the evolution of complex signals displayed by males of this group; however, this has not yet been demonstrated. In fact, despite apparent widespread examples of sexual selection in nature, empirical evidence is relatively sparse, especially for species employing multiple modalities for intersexual communication. In order to elucidate whether female preference can explain the evolution of multi-modal signalling traits, we ran a series of mating trials using Maratus volans. We used video recordings and laser vibrometry to characterize, quantify and examine which male courtship traits predict various metrics of mating success. We found evidence for strong sexual selection on males in this system, with success contingent upon a combination of visual and vibratory displays. Additionally, independently produced, yet correlated suites of multi-modal male signals are linked to other aspects of female peacock spider behaviour. Lastly, our data provide some support for both the redundant signal and multiple messages hypotheses for the evolution of multi-modal signalling. PMID:26631566

  5. Comparing uni-modal and multi-modal therapies for improving writing in acquired dysgraphia after stroke.

    PubMed

    Thiel, Lindsey; Sage, Karen; Conroy, Paul

    2016-01-01

    Writing therapy studies have been predominantly uni-modal in nature; i.e., their central therapy task has typically been either writing to dictation or copying and recalling words. There has not yet been a study that has compared the effects of a uni-modal to a multi-modal writing therapy in terms of improvements to spelling accuracy. A multiple-case study with eight participants aimed to compare the effects of a uni-modal and a multi-modal therapy on the spelling accuracy of treated and untreated target words at immediate and follow-up assessment points. A cross-over design was used and within each therapy a matched set of words was targeted. These words and a matched control set were assessed before as well as immediately after each therapy and six weeks following therapy. The two approaches did not differ in their effects on spelling accuracy of treated or untreated items or degree of maintenance. All participants made significant improvements on treated and control items; however, not all improvements were maintained at follow-up. The findings suggested that multi-modal therapy did not have an advantage over uni-modal therapy for the participants in this study. Performance differences were instead driven by participant variables.

  6. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    NASA Astrophysics Data System (ADS)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically ``streamlined'' rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  7. Human genome and open source: balancing ethics and business.

    PubMed

    Marturano, Antonio

    2011-01-01

    The Human Genome Project has been completed thanks to a massive use of computer techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won over the proprietary model and allowed a quick propagation of, and feedback on, research results among peers. In this paper, the author will analyse some ethical and legal issues arising from the use of this computing model with respect to Human Genome property rights. The author will argue that Open Source is the best business model, as it is able to balance business and human rights perspectives.

  8. Open-source software for radiologists: a primer.

    PubMed

    Scarsbrook, A F

    2007-02-01

    There is a wide variety of free (open-source) software available via the Internet which may be of interest to radiologists. This article will explore the use of open-source software in radiology to help streamline academic workflow and improve general efficiency and effectiveness by highlighting a number of the most useful applications currently available. These include really simple syndication applications, e-mail management, spreadsheet, word processing, database and presentation packages, as well as image and video editing software. How to incorporate this software into radiological practice will also be discussed.

  9. Multi-modal Learning-based Pre-operative Targeting in Deep Brain Stimulation Procedures.

    PubMed

    Liu, Yuan; Dawant, Benoit M

    2016-02-01

    Deep brain stimulation, as a primary surgical treatment for various neurological disorders, involves implanting electrodes to stimulate target nuclei within millimeter accuracy. Accurate pre-operative target selection is challenging due to the poor contrast in its surrounding region in MR images. In this paper, we present a learning-based method to automatically and rapidly localize the target using multi-modal images. A learning-based technique is applied first to spatially normalize the images in a common coordinate space. Given a point in this space, we extract a heterogeneous set of features that capture spatial and intensity contextual patterns at different scales in each image modality. Regression forests are used to learn a displacement vector of this point to the target. The target is predicted as a weighted aggregation of votes from various test samples, leading to a robust and accurate solution. We conduct five-fold cross-validation using 100 subjects and compare our method to three indirect targeting methods, a state-of-the-art statistical atlas-based approach, and two variations of our method that use only a single modality image. With an overall error of 2.63 ± 1.37 mm, our method improves upon the single-modality-based variations and statistically significantly outperforms the indirect targeting ones. Our technique matches state-of-the-art registration methods but operates on completely different principles. Both techniques can be used in tandem in processing pipelines operating on large databases or in the clinical flow for automated error detection.
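The vote-aggregation step described above can be sketched as follows; the regression-forest training and feature extraction are omitted, and all names and numbers are illustrative:

```python
import numpy as np

def aggregate_votes(positions, displacements, weights):
    """Each test sample at positions[i] predicts the target at
    positions[i] + displacements[i] with confidence weights[i];
    the target estimate is the weighted mean of all votes."""
    votes = np.asarray(positions, float) + np.asarray(displacements, float)
    w = np.asarray(weights, float)
    return (w[:, None] * votes).sum(axis=0) / w.sum()
```

Averaging many weighted votes makes the estimate robust to individual samples whose learned displacement is poor.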

  10. A Novel Multi-Modal Drug Repurposing Approach for Identification of Potent ACK1 Inhibitors

    PubMed Central

    Phatak, Sharangdhar S.; Zhang, Shuxing

    2013-01-01

    Exploiting drug polypharmacology to identify novel modes of action for drug repurposing has gained significant attention in the current era of weak drug pipelines. From serendipitous to systematic or rational approaches, a variety of unimodal computational methods have been developed, but the complexity of the problem clearly calls for multi-modal approaches for better solutions. In this study, we propose an integrative computational framework based on classical structure-based drug design and chemical-genomic similarity methods, combined with molecular graph theories, for this task. Briefly, a pharmacophore modeling method was employed to guide the selection of docked poses resulting from our high-throughput virtual screening. We then evaluated whether complementary results (hits missed by docking) can be obtained by using a novel chemo-genomic similarity approach based on chemical/sequence information. Finally, we developed a bipartite graph based on extensive data curation of DrugBank, PDB, and UniProt. This drug-target bipartite graph was used to assess the similarity of different inhibitors based on their connections to other compounds and targets. The approaches were applied to the repurposing of existing drugs against ACK1, a novel cancer target significantly overexpressed in breast and prostate cancers during their progression. Upon screening of ~1,447 marketed drugs, a final set of 10 hits was selected for experimental testing. Among them, four drugs were identified as potent ACK1 inhibitors; notably, Dasatinib inhibited ACK1 with an IC50 of 1 nM. We anticipate that our novel, integrative strategy can be easily extended to other biological targets with a more comprehensive coverage of known bio-chemical space for repurposing studies. PMID:23424109
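A hedged sketch of the drug-target bipartite-graph similarity idea: two drugs can be scored by the overlap of their target neighbourhoods in the graph, e.g. with a Jaccard index. The toy adjacency below is illustrative, not the paper's curated data:

```python
# Toy drug-target adjacency; the real graph is curated from
# DrugBank, PDB and UniProt.
drug_targets = {
    "dasatinib": {"ABL1", "SRC", "KIT"},
    "imatinib":  {"ABL1", "KIT", "PDGFRA"},
    "aspirin":   {"PTGS1", "PTGS2"},
}

def jaccard(a, b):
    """Similarity of two drugs = overlap of their target sets."""
    ta, tb = drug_targets[a], drug_targets[b]
    return len(ta & tb) / len(ta | tb)

print(jaccard("dasatinib", "imatinib"))  # 0.5 (two shared targets of four)
```

Drugs highly connected to known inhibitors of a target become repurposing candidates even when docking misses them.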

  11. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to capture the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
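A simplified sketch of the unsupervised audio branch: model the normal ambience and flag frames whose likelihood under that model falls below a threshold. A single diagonal Gaussian stands in for the paper's per-cluster GMMs; the names and threshold are illustrative:

```python
import numpy as np

def fit_ambience(frames):
    """Fit a diagonal Gaussian to acoustic feature frames recorded
    during normal (training) ambience."""
    return frames.mean(axis=0), frames.std(axis=0) + 1e-6

def is_unusual(frame, mu, sigma, thresh=-10.0):
    """Flag a test frame whose log-likelihood under the ambience
    model falls below the (illustrative) threshold."""
    log_lik = -0.5 * np.sum(((frame - mu) / sigma) ** 2
                            + np.log(2 * np.pi * sigma ** 2))
    return log_lik < thresh
```

In the full system such audio flags are not alarms on their own; they are fused with the video intrusion detections to cut the false alarm rate.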

  12. Multiscale and multi-modality visualization of angiogenesis in a human breast cancer model.

    PubMed

    Cebulla, Jana; Kim, Eugene; Rhie, Kevin; Zhang, Jiangyang; Pathak, Arvind P

    2014-07-01

    Angiogenesis in breast cancer helps fulfill the metabolic demands of the progressing tumor and plays a critical role in tumor metastasis. Therefore, various imaging modalities have been used to characterize tumor angiogenesis. While micro-CT (μCT) is a powerful tool for analyzing the tumor microvascular architecture at micron-scale resolution, magnetic resonance imaging (MRI) with its sub-millimeter resolution is useful for obtaining in vivo vascular data (e.g. tumor blood volume and vessel size index). However, integration of these microscopic and macroscopic angiogenesis data across spatial resolutions remains challenging. Here we demonstrate the feasibility of 'multiscale' angiogenesis imaging in a human breast cancer model, wherein we bridge the resolution gap between ex vivo μCT and in vivo MRI using intermediate resolution ex vivo MR microscopy (μMRI). To achieve this integration, we developed suitable vessel segmentation techniques for the ex vivo imaging data and co-registered the vascular data from all three imaging modalities. We showcase two applications of this multiscale, multi-modality imaging approach: (1) creation of co-registered maps of vascular volume from three independent imaging modalities, and (2) visualization of differences in tumor vasculature between viable and necrotic tumor regions by integrating μCT vascular data with tumor cellularity data obtained using diffusion-weighted MRI. Collectively, these results demonstrate the utility of 'mesoscopic' resolution μMRI for integrating macroscopic in vivo MRI data and microscopic μCT data. Although focused on the breast tumor xenograft vasculature, our imaging platform could be extended to include additional data types for a detailed characterization of the tumor microenvironment and computational systems biology applications.

  13. A novel technique to incorporate structural prior information into multi-modal tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Ourselin, Sébastien; Hutton, Brian F.; Dobson, Katherine J.; Kaestner, Anders P.; Lionheart, William R. B.; Withers, Philip J.; Lee, Peter D.; Arridge, Simon R.

    2014-06-01

    There has been a rapid expansion of multi-modal imaging techniques in tomography. In biomedical imaging, patients are now regularly imaged using both single photon emission computed tomography (SPECT) and x-ray computed tomography (CT), or using both positron emission tomography and magnetic resonance imaging (MRI). In non-destructive testing of materials, both neutron CT (NCT) and x-ray CT are widely applied to investigate the inner structure of material or track the dynamics of physical processes. The potential benefits from combining modalities have led to increased interest in iterative reconstruction algorithms that can utilize the data from more than one imaging mode simultaneously. We present a new regularization term in iterative reconstruction that enables information from one imaging modality to be used as a structural prior to improve resolution of the second modality. The regularization term is based on a modified anisotropic tensor diffusion filter that has shape-adapted smoothing properties. By considering the underlying orientations of normal and tangential vector fields for two co-registered images, the diffusion flux is rotated and scaled adaptively to image features. The images can have different greyscale values and different spatial resolutions. The proposed approach is particularly good at isolating oriented features in images which are important for medical and materials science applications. By enhancing the edges it enables both easy identification and volume fraction measurements, aiding segmentation algorithms used for quantification. The approach is tested on a standard denoising and deblurring image recovery problem, and then applied to 2D and 3D reconstruction problems, thereby highlighting the capabilities of the algorithm. Using synthetic data from SPECT co-registered with MRI, and real NCT data co-registered with x-ray CT, we show how the method can be used across a range of imaging modalities.
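A toy 1D version of the structural-prior idea: smooth a noisy signal while damping the diffusion flux wherever the co-registered prior image has a strong gradient, so edges shared between modalities are preserved. The full method uses a shape-adapted anisotropic tensor in 2D/3D; this sketch (invented names and parameters) only illustrates prior-weighted flux:

```python
import numpy as np

def prior_guided_smooth(signal, prior, n_iter=50, dt=0.2, k=0.5):
    """Explicit diffusion of `signal`, with inter-pixel flux scaled
    down where the co-registered `prior` has a strong gradient."""
    u = signal.astype(float).copy()
    g = np.abs(np.gradient(prior))         # prior edge strength
    conduct = 1.0 / (1.0 + (g / k) ** 2)   # small across prior edges
    for _ in range(n_iter):
        flux = conduct[:-1] * np.diff(u)   # damped flux between pixels
        u[:-1] += dt * flux                # conservative update:
        u[1:] -= dt * flux                 # what leaves i arrives at i+1
    return u
```

Flat regions are smoothed strongly while the prior's edge locations stay sharp, which is the behaviour the regularization term aims for in the reconstructed modality.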

  14. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    NASA Astrophysics Data System (ADS)

    Paelke, V.; Nebe, K.; Geiger, C.; Klompmaker, F.; Fischer, H.

    2012-07-01

    Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is the access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations conventional cartographic displays and mouse based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to control who is

  15. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiration. A variety of 4D imaging techniques have been developed, and others are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of several techniques, can enable comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires good understanding of not only each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session focuses on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  16. Implementation of a multi-modal mobile sensor system for surface and subsurface assessment of roadways

    NASA Astrophysics Data System (ADS)

    Wang, Ming; Birken, Ralf; Shahini Shamsabadi, Salar

    2015-03-01

    There are more than 4 million miles of roads and 600,000 bridges in the United States alone. On-going investments are required to maintain the physical and operational quality of these assets to ensure the public's safety and the prosperity of the economy. Planning efficient maintenance and repair (M&R) operations requires a meticulous pavement inspection method that is non-disruptive, is affordable, and requires minimal manual effort. The Versatile Onboard Traffic Embedded Roaming Sensors (VOTERS) project developed a technology able to cost-effectively monitor the condition of roadway systems to plan for the right repairs, in the right place, at the right time. VOTERS technology consists of an affordable, lightweight package of multi-modal sensor systems including acoustic, optical, electromagnetic, and GPS sensors. Vehicles outfitted with this technology would be capable of collecting information on a variety of pavement-related characteristics at both surface and subsurface levels as they are driven. By correlating the sensors' outputs with positioning data collected in tight time synchronization, a GIS-based control center attaches a spatial component to all the sensors' measurements and delivers multiple ratings of the pavement every meter. These spatially indexed ratings are then leveraged by VOTERS decision-making modules to plan optimal M&R operations and predict future budget needs. In 2014, VOTERS inspection results were validated by comparing them to the outputs of recent professional condition surveys conducted by a local engineering firm for 300 miles of Massachusetts roads. The success of the VOTERS project demonstrates rapid, intelligent, and comprehensive evaluation of tomorrow's transportation infrastructure to increase the public's safety, vitalize the economy, and deter catastrophic failures.
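    The georeferencing step described above, attaching a spatial component to each sensor measurement via tightly time-synchronized positioning data, can be sketched as follows. This is an illustrative reconstruction, not VOTERS code; the function name and the sample GPS track are invented.

```python
# Illustrative sketch of time-synchronized georeferencing: a sensor reading
# is assigned a position by linearly interpolating between the two GPS
# fixes that bracket its timestamp. All data values are invented.
def interpolate_position(gps_track, t):
    """gps_track: sorted (time, lat, lon) tuples; t: sensor timestamp."""
    for (t0, la0, lo0), (t1, la1, lo1) in zip(gps_track, gps_track[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
    raise ValueError("timestamp outside GPS track")

# Two GPS fixes ten seconds apart; a sensor reading taken at t = 5.0 s
# is placed halfway between them.
track = [(0.0, 42.3600, -71.0580), (10.0, 42.3610, -71.0570)]
print(interpolate_position(track, 5.0))
```

    In a real system each sensor's readings would carry hardware timestamps from a shared clock, which is what makes this interpolation meaningful.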

  17. Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data.

    PubMed

    Yuan, Lei; Wang, Yalin; Thompson, Paul M; Narayan, Vaibhav A; Ye, Jieping

    2012-01-01

    Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods in which all samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results.
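    The second method described above can be sketched in miniature: one column of prediction scores per data source, missing entries estimated, then fused. The column-mean imputation and unweighted averaging below are simplifying stand-ins for the paper's estimation and fusion models; all names and numbers are invented.

```python
# Hedged sketch of score-level multi-source fusion with missing entries.
# Rows are subjects, columns are data sources; None marks a missing score.
def fuse_scores(score_matrix):
    n_cols = len(score_matrix[0])
    # Estimate each missing score from the column's observed mean
    # (a simple stand-in for the paper's estimation step).
    col_means = []
    for j in range(n_cols):
        observed = [row[j] for row in score_matrix if row[j] is not None]
        col_means.append(sum(observed) / len(observed))
    filled = [[row[j] if row[j] is not None else col_means[j]
               for j in range(n_cols)] for row in score_matrix]
    # Unweighted fusion: average the per-source prediction scores.
    return [sum(row) / n_cols for row in filled]

scores = [
    [0.9, None, 0.8],   # subject missing one source (e.g. FDG-PET-like)
    [0.2, 0.3, None],
    [0.6, 0.5, 0.4],
]
print(fuse_scores(scores))
```

    The point of the design is that no subject is discarded: every row contributes its observed columns to the column estimates and receives a fused score.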

  18. Multi-modal iterative adaptive processing (MIAP) performance in the discrimination mode for landmine detection

    NASA Astrophysics Data System (ADS)

    Yu, Yongli; Collins, Leslie M.

    2005-06-01

    Due to the nature of landmine detection, a high detection probability (Pd) is required to avoid casualties and injuries. However, high Pd is often obtained at the price of extremely high false alarm rates. It is widely accepted that no single sensor technology can achieve the required detection rate while keeping acceptably low false alarm rates for all types of mines in all types of soil and with all types of false targets. Remarkable advances in sensor technology for landmine detection have made multi-sensor fusion an attractive alternative to single-sensor detection techniques. Hence, multi-sensor fusion mine detection systems, which use complementary sensor technologies, have been proposed. Previously we proposed a new multi-sensor fusion algorithm called Multi-modal Iterative Adaptive Processing (MIAP), which incorporates information from multiple sensors in an adaptive Bayesian decision framework; the identification capabilities of the individual sensors are used to modify the statistical models employed by the mine detector. Simulation results demonstrate the improvement in performance obtained using the MIAP algorithm. In this paper, we assume a hand-held mine detection system utilizing both an electromagnetic induction (EMI) sensor and a ground-penetrating radar (GPR). The hand-held mine detection sensors are designed to have two modes of operation: search mode and discrimination mode. Search mode generates an initial detection at a suspected location, and discrimination mode confirms whether a mine is present. The MIAP algorithm is applied in the discrimination mode for hand-held mine detection. The performance of the detector is evaluated on a data set collected by the government and compared with that of traditional fusion methods.
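    As background to the kind of adaptive Bayesian fusion MIAP builds on, the following is a minimal sketch of fusing EMI and GPR evidence under a conditional-independence assumption. It is not the MIAP algorithm itself (its iterative model adaptation is not reproduced here), and all prior and likelihood values are invented.

```python
# Minimal Bayesian two-sensor fusion sketch (NOT the MIAP algorithm):
# combine per-sensor likelihoods for "mine" vs "clutter" hypotheses,
# assuming the sensors are conditionally independent given the hypothesis.
def posterior_mine(prior, lik_mine, lik_clutter):
    """prior: P(mine); lik_mine/lik_clutter: per-sensor likelihoods."""
    p_mine = prior
    p_clut = 1.0 - prior
    for lm, lc in zip(lik_mine, lik_clutter):
        p_mine *= lm
        p_clut *= lc
    return p_mine / (p_mine + p_clut)

# EMI and GPR likelihoods for one suspected location (hypothetical numbers):
print(posterior_mine(0.1, lik_mine=[0.8, 0.7], lik_clutter=[0.3, 0.2]))
```

    Even with a low prior, two sensors that each favor the mine hypothesis can push the fused posterior to roughly even odds, which is why discrimination mode benefits from multiple modalities.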

  19. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  20. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed-source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.
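    To illustrate the kind of syntax adaptation a MATLAB-to-Python translator must perform (this hand-written sketch is not OMPC's actual output), consider a small MATLAB loop and a direct Python equivalent:

```python
# MATLAB source (hypothetical):
#   s = 0;
#   for k = 1:5
#       s = s + k^2;
#   end
# A direct Python rendering must adapt two syntactic differences:
s = 0
for k in range(1, 6):   # MATLAB's 1:5 includes both endpoints
    s += k ** 2         # MATLAB's ^ operator becomes Python's **
print(s)  # 55
```

    Beyond operators and ranges, a real translator must also emulate 1-based indexing and MATLAB's copy-on-assign array semantics, which is where the "emulation" part of OMPC's approach comes in.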

  1. Open Source Projects in Software Engineering Education: A Mapping Study

    ERIC Educational Resources Information Center

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  2. Chinese Localisation of Evergreen: An Open Source Integrated Library System

    ERIC Educational Resources Information Center

    Zou, Qing; Liu, Guoying

    2009-01-01

    Purpose: The purpose of this paper is to investigate various issues related to Chinese language localisation in Evergreen, an open source integrated library system (ILS). Design/methodology/approach: A Simplified Chinese version of Evergreen was implemented and tested and various issues such as encoding, indexing, searching, and sorting…

  3. The Value of Open Source Software Tools in Qualitative Research

    ERIC Educational Resources Information Center

    Greenberg, Gary

    2011-01-01

    In an era of global networks, researchers using qualitative methods must consider the impact of any software they use on the sharing of data and findings. In this essay, I identify researchers' main areas of concern regarding the use of qualitative software packages for research. I then examine how open source software tools, wherein the publisher…

  4. Higher Education Sub-Cultures and Open Source Adoption

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2011-01-01

    Successful adoption of new teaching and learning technologies in higher education requires the consensus of two sub-cultures, namely the technologist sub-culture and the academic sub-culture. This paper examines trends in adoption of open source software (OSS) for teaching and learning by comparing the results of a 2009 survey of 285 Chief…

  5. Critical Analysis on Open Source LMSs Using FCA

    ERIC Educational Resources Information Center

    Sumangali, K.; Kumar, Ch. Aswani

    2013-01-01

    The objective of this paper is to apply Formal Concept Analysis (FCA) to identify the best open source Learning Management System (LMS) for an E-learning environment. FCA is a mathematical framework that represents knowledge derived from a formal context. In constructing the formal context, LMSs are treated as objects and their features as…
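    The core FCA operations the paper relies on, deriving the set of objects sharing given attributes and the set of attributes shared by given objects, can be sketched with a toy context. The LMS names and features below are invented for illustration, not the paper's data.

```python
# Toy formal context: LMSs are objects, features are attributes.
context = {
    "Moodle": {"quizzes", "forums", "scorm"},
    "Sakai":  {"forums", "scorm"},
    "ILIAS":  {"quizzes", "scorm"},
}

def extent(features):
    """All objects possessing every feature in the given set."""
    return {o for o, attrs in context.items() if features <= attrs}

def intent(objects):
    """All features shared by every object in the given set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

# A formal concept pairs an extent with its intent; here, the concept
# generated by the feature "scorm":
objs = extent({"scorm"})
print(sorted(objs), sorted(intent(objs)))
```

    Iterating extent and intent to a fixed point enumerates all concepts of the context; ranking LMSs then amounts to inspecting where each object sits in the resulting concept lattice.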

  6. Faculty/Student Surveys Using Open Source Software

    ERIC Educational Resources Information Center

    Kaceli, Sali

    2004-01-01

    This session will highlight an easy survey package which lets non-technical users create surveys, administer surveys, gather results, and view statistics. This is an open source application all managed online via a web browser. By using phpESP, the faculty is given the freedom of creating various surveys at their convenience and link them to their…

  7. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  8. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  9. [Osirix: free and open-source software for medical imagery].

    PubMed

    Jalbert, F; Paoli, J R

    2008-02-01

    Osirix is a tool for diagnostic imaging, teaching, and research, with many possible applications in maxillofacial and oral surgery. It is free and open-source software developed on Mac OS X (Apple) by Dr Antoine Rosset and Dr Osman Ratib, in the department of radiology and medical computing of Geneva (Switzerland).

  10. Is Open Source the ERP Cure-All?

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…

  11. Current challenges in open-source bioimage informatics.

    PubMed

    Cardona, Albert; Tomancak, Pavel

    2012-06-28

    We discuss the advantages and challenges of the open-source strategy in biological image analysis and argue that its full impact will not be realized without better support and recognition of software engineers' contributions to the biological sciences and more support of this development model from funders and institutions.

  12. Open Source Solutions for Libraries: ABCD vs Koha

    ERIC Educational Resources Information Center

    Macan, Bojan; Fernandez, Gladys Vanesa; Stojanovski, Jadranka

    2013-01-01

    Purpose: The purpose of this study is to present an overview of the two open source (OS) integrated library systems (ILS)--Koha and ABCD (ISIS family), to compare their "next-generation library catalog" functionalities, and to give comparison of other important features available through ILS modules. Design/methodology/approach: Two open source…

  13. Open source tools for ATR development and performance evaluation

    NASA Astrophysics Data System (ADS)

    Baumann, James M.; Dilsavor, Ronald L.; Stubbles, James; Mossing, John C.

    2002-07-01

    Early in almost every engineering project, a decision must be made about tools; should I buy off-the-shelf tools or should I develop my own. Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and to maintain licenses, and may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.

  14. The Case for Open Source Software in Digital Forensics

    NASA Astrophysics Data System (ADS)

    Zanero, Stefano; Huebner, Ewa

    In this introductory chapter we discuss the importance of the use of open source software (OSS), and in particular of free software (FLOSS) in computer forensics investigations including the identification, capture, preservation and analysis of digital evidence; we also discuss the importance of OSS in computer forensics

  15. Teaching Undergraduate Software Engineering Using Open Source Development Tools

    DTIC Science & Technology

    2012-01-01

    on Computer Science Education (SIGCSE '11), 153-158. Pandey, R. (2009). Exploiting web resources for teaching/learning best software design tips...Issues in Informing Science and Information Technology Volume 9, 2012 Teaching Undergraduate Software Engineering Using Open Source Development...multi-course sequence, to teach students both the theoretical concepts of software development as well as the practical aspects of developing software

  16. Bioclipse: an open source workbench for chemo- and bioinformatics

    PubMed Central

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl ES

    2007-01-01

    Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is open to both open source and commercial plugins. Bioclipse is freely available at . PMID:17316423

  17. Open Source Drug Discovery in Practice: A Case Study

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Background Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionally affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. Methodology/Principal Findings A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal, with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Conclusions/Significance Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality

  18. NASA's Open Source Software for Serving and Viewing Global Imagery

    NASA Astrophysics Data System (ADS)

    Roberts, J. T.; Alarcon, C.; Boller, R. A.; Cechini, M. F.; Gunnoe, T.; Hall, J. R.; Huang, T.; Ilavajhala, S.; King, J.; McGann, M.; Murphy, K. J.; Plesea, L.; Schmaltz, J. E.; Thompson, C. K.

    2014-12-01

    The NASA Global Imagery Browse Services (GIBS), which provide open access to an enormous archive of historical and near real time imagery from NASA supported satellite instruments, has also released most of its software to the general public as open source. The software packages, originally developed at the Jet Propulsion Laboratory and Goddard Space Flight Center, currently include: 1) the Meta Raster Format (MRF) GDAL driver—GDAL support for a specialized file format used by GIBS to store imagery within a georeferenced tile pyramid for exceptionally fast access; 2) OnEarth—a high performance Apache module used to serve tiles from MRF files via common web service protocols; 3) Worldview—a web mapping client to interactively browse global, full-resolution satellite imagery and download underlying data. Examples that show developers how to use GIBS with various mapping libraries and programs are also available. This stack of tools is intended to provide an out-of-the-box solution for serving any georeferenced imagery. Scientists as well as the general public can use the open source software for their own applications such as developing visualization interfaces for improved scientific understanding and decision support, hosting a repository of browse images to help find and discover satellite data, or accessing large datasets of geo-located imagery in an efficient manner. Open source users may also contribute back to NASA and the wider Earth Science community by taking an active role in evaluating and developing the software. This presentation will discuss the experiences of developing the software in an open source environment and useful lessons learned. To access the open source software repositories, please visit: https://github.com/nasa-gibs/
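    As a rough illustration of the tile-pyramid addressing that makes formats like MRF fast to serve, the sketch below maps a geographic coordinate to a tile index at a given zoom level. The 2×1 level-0 layout is an assumption chosen for illustration, not GIBS's actual tiling scheme.

```python
# Generic tile-pyramid addressing sketch: each zoom level doubles the
# resolution, and a (lat, lon) pair maps to a (row, col) tile index.
def tile_index(lat, lon, level):
    """Map lat/lon (degrees) to a tile row/col at a zoom level.
    Level 0 is assumed to be 2 columns x 1 row covering the globe."""
    cols = 2 ** (level + 1)
    rows = 2 ** level
    col = int((lon + 180.0) / 360.0 * cols)
    row = int((90.0 - lat) / 180.0 * rows)
    # Clamp edge coordinates (lat = -90 or lon = 180) into the last tile.
    return min(row, rows - 1), min(col, cols - 1)

print(tile_index(34.05, -118.25, 3))  # a point over Los Angeles
```

    Because the index is pure arithmetic, a server can seek directly to the requested tile inside a pyramid file without scanning it, which is the property MRF exploits.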

  19. Optimizing boundary detection via Simulated Search with applications to multi-modal heart segmentation.

    PubMed

    Peters, J; Ecabert, O; Meyer, C; Kneser, R; Weese, J

    2010-02-01

    Segmentation of medical images can be achieved with the help of model-based algorithms. Reliable boundary detection is a crucial component to obtain robust and accurate segmentation results and to enable full automation. This is especially important if the anatomy being segmented is too variable to initialize a mean shape model such that all surface regions are close to the desired contours. Several boundary detection algorithms are widely used in the literature. Most use some trained image appearance model to characterize and detect the desired boundaries. Although parameters of the boundary detection can vary over the model surface and are trained on images, their performance (i.e., accuracy and reliability of boundary detection) can only be assessed as an integral part of the entire segmentation algorithm. In particular, assessment of boundary detection cannot be done locally and independently of the model parameterization and the internal energies controlling geometric model properties. In this paper, we propose a new method for the local assessment of boundary detection called Simulated Search. This method takes any boundary detection function and evaluates its performance for a single model landmark in terms of an estimated geometric boundary detection error. In consequence, boundary detection can be optimized per landmark during model training. We demonstrate the success of the method for cardiac image segmentation. In particular we show that the Simulated Search improves the capture range and the accuracy of the boundary detection compared to a traditional training scheme. We also illustrate how the Simulated Search can be used to identify suitable classes of features when addressing a new segmentation task. Finally, we show that the Simulated Search enables multi-modal heart segmentation using a single algorithmic framework. On computed tomography and magnetic resonance images, average segmentation errors (surface-to-surface distances) for the four chambers and
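    The Simulated Search idea, displacing a landmark by known offsets and scoring the detector's geometric error from each start, can be sketched in one dimension. The toy detector below is invented for illustration; the paper's detectors operate on trained image-appearance models.

```python
# Hedged 1-D sketch of Simulated Search: start the boundary detector from
# known displacements of a landmark and report the mean geometric error.
def simulated_search_error(detector, true_pos, offsets):
    """Estimate a detector's boundary-detection error at one landmark."""
    errors = [abs(detector(true_pos + d) - true_pos) for d in offsets]
    return sum(errors) / len(errors)

# Toy detector: snaps any starting point to the nearest multiple of 5
# (imagine a candidate edge every 5 mm along the search profile).
def toy_detector(start):
    return 5 * round(start / 5)

offsets = [-4, -2, 0, 2, 4]
print(simulated_search_error(toy_detector, true_pos=20, offsets=offsets))
```

    Scoring each landmark this way lets training pick, per landmark, the detection function with the smallest simulated error, which is the per-landmark optimization the paper describes.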

  20. Transforming High School Classrooms with Free/Open Source Software: "It's Time for an Open Source Software Revolution"

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2008-01-01

    Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…

  1. Open Source and ROI: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    A switch to free open source software can minimize cost and allow funding to be diverted to equipment and other programs. For instance, the OpenOffice suite is an alternative to expensive basic application programs offered by major vendors. Many such programs on the market offer features seldom used in education but for which educators must pay.…

  2. Cross-platform learning: on the nature of children's learning from multiple media platforms.

    PubMed

    Fisch, Shalom M

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several recent studies to explore cross-platform learning (i.e., learning from combined use of multiple media platforms) and how such learning compares to learning from one medium. The paper discusses unique benefits of cross-platform learning, a theoretical mechanism to explain how these benefits might arise, and questions for future research in this emerging field.

  3. Comparison of open-source visual analytics toolkits

    NASA Astrophysics Data System (ADS)

    Harger, John R.; Crossno, Patricia J.

    2012-01-01

    We present the results of the first stage of a two-stage evaluation of open source visual analytics packages. This stage is a broad feature comparison over a range of open source toolkits. Although we had originally intended to restrict ourselves to comparing visual analytics toolkits, we quickly found that very few were available. So we expanded our study to include information visualization, graph analysis, and statistical packages. We examine three aspects of each toolkit: visualization functions, analysis capabilities, and development environments. With respect to development environments, we look at platforms, language bindings, multi-threading/parallelism, user interface frameworks, ease of installation, documentation, and whether the package is still being actively developed.

  4. Open source, open standards, and health care information systems.

    PubMed

    Reynolds, Carl J; Wyatt, Jeremy C

    2011-02-17

    Recognition of the improvements in patient safety, quality of patient care, and efficiency that health care information systems have the potential to bring has led to significant investment. Globally the sale of health care information systems now represents a multibillion dollar industry. As policy makers, health care professionals, and patients, we have a responsibility to maximize the return on this investment. To this end we analyze alternative licensing and software development models, as well as the role of standards. We describe how licensing affects development. We argue for the superiority of open source licensing to promote safer, more effective health care information systems. We claim that open source licensing in health care information systems is essential to rational procurement strategy.

  5. Virtual Machine for Computer Forensics - the Open Source Perspective

    NASA Astrophysics Data System (ADS)

    Bem, Derek

    In this paper we discuss the potential role of virtual environments in the analysis phase of computer forensics investigations. We argue that commercial closed source computer forensics software has certain limitations, and we propose a method which may lead to a gradual shift to open source software (OSS). A brief overview of virtual environments and open source software tools is presented and discussed. Further, we identify current limitations of virtual environments, leading to the conclusion that the method is very promising, but at this point in time it cannot replace conventional techniques of computer forensics analysis. We demonstrate that using Virtual Machines (VM) in Linux environments can complement the conventional techniques, and often can bring faster and verifiable results not dependent on proprietary, closed-source tools.

  6. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

    have been put in place for the projects: • Community website dedicated to Open Chemistry projects • Git source code repositories (Kitware, mirrored...A10-110 Proposal A2-4714 Kitware, Inc. The Gerrit code review system,[12] developed by Google as an open-source project for the Android operating...with nightly software build testing on all three major platforms for merged code and testing of proposed changes using CDash@Home[13] (an open-source

  7. Cross-Platform JavaScript Coding: Shifting Sand Dunes and Shimmering Mirages.

    ERIC Educational Resources Information Center

    Merchant, David

    1999-01-01

    Most libraries don't have the resources to cross-platform and cross-version test all of their JavaScript coding. Many turn to WYSIWYG; however, WYSIWYG editors don't generally produce optimized coding. Web developers should: test their coding on at least one 3.0 browser, code by hand using tools to help speed that process up, and include a simple…

  8. GISCube, an Open Source Web-based GIS Application

    NASA Astrophysics Data System (ADS)

    Boustani, M.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    There are many Earth science projects and data systems being developed at the Jet Propulsion Laboratory, California Institute of Technology (JPL) that require the use of Geographic Information Systems (GIS). Three in particular are: (1) the JPL Airborne Snow Observatory (ASO), which measures the amount of water generated from snow melt in mountains; (2) the Regional Climate Model Evaluation System (RCMES), which compares climate model outputs with remote sensing datasets in the context of model evaluation, the Intergovernmental Panel on Climate Change, and the U.S. National Climate Assessment; and (3) the JPL Snow Server, which produces a snow and ice climatology for the Western US and Alaska for the U.S. National Climate Assessment. These three examples, and Earth science projects in general, are strongly in need of GIS and geoprocessing capabilities to process, visualize, manage and store GeoSpatial data. Besides some open source GIS libraries and some software like ArcGIS, there are comparatively few open source, web-based and easy-to-use applications capable of GIS processing and visualization. To address this, we present GISCube, an open source web-based GIS application that can store, visualize and process GIS and GeoSpatial data. GISCube is powered by Geothon, an open source Python GIS cookbook. Geothon has a variety of geoprocessing tools, such as data conversion, processing, spatial analysis and data management tools. GISCube supports a variety of well-known GIS data formats in both vector and raster form, and the system is being expanded to support NASA's and scientific data formats such as netCDF and HDF files. In this talk, we demonstrate how Earth science and other projects can benefit from using GISCube and Geothon, its current goals and our future work in the area.

  9. Open Source Intelligence - Doctrine’s Neglected Child

    DTIC Science & Technology

    2007-11-02

    1 Richard S. Friedman, “Open Source Intelligence,” Parameters (Summer 1998): 159; quoted in David Reed, “Aspiring to Spying...Richard S. Friedman, 162-163. 11 Wyn Bowen, 52. 12 Richard S. Friedman, 164; quoted in Ray Cline, “Introduction,” The Intelligence War (London...evacuation operations, counter-terrorist operations, foreign internal defense, peace operations, consequence management, and humanitarian assistance

  10. DESIGN NOTE: SCOUT - Surface Characterization Open-Source Universal Toolbox

    NASA Astrophysics Data System (ADS)

    Sacerdotti, F.; Porrino, A.; Butler, C.; Brinkmann, S.; Vermeulen, M.

    2002-02-01

    Surface topography plays a significant role in functional performance situations such as friction, lubrication and wear. A European Community funded research programme on areal characterization of steel sheet has recently assisted research in this area. This article is dedicated to the software that supported most of the programme. Born as a rudimentary collection of procedures, it grew steadily into an integrated package, later equipped with a graphical interface and circulated to the research community under the Open-Source philosophy.

  11. Survivability as a Tool for Evaluating Open Source Software

    DTIC Science & Technology

    2015-06-01

    mistaken to mean “free” software. It may be true in some instances that open source software is offered free-of-charge, but there are other instances...upgrades, and maintenance [11]. Cost is always a big driver of software selection, and it should be considered even when using OSS acquired free of charge...source projects during their free time, and may build software because the commercial-off-the-shelf (COTS) software already in existence does not offer

  12. Open Source Software For Patient Data Management In Critical Care.

    PubMed

    Massaut, Jacques; Charretk, Nicolas; Gayraud, Olivia; Van Den Bergh, Rafael; Charles, Adelin; Edema, Nathalie

    2015-01-01

    We have previously developed a Patient Data Management System for Intensive Care based on Open Source Software. The aim of this work was to adapt this software for use in Emergency Departments in low-resource environments. The new software includes facilities for utilization of the South African Triage Scale and for prediction of mortality based on independent predictive factors derived from data from the Tabarre Emergency Trauma Center in Port-au-Prince, Haiti.

  13. An open source model for open access journal publication.

    PubMed

    Blesius, Carl R; Williams, Michael A; Holzbach, Ana; Huntley, Arthur C; Chueh, Henry

    2005-01-01

    We describe an electronic journal publication infrastructure that allows a flexible publication workflow, academic exchange around different forms of user submissions, and the exchange of articles between publishers and archives using a common XML based standard. This web-based application is implemented on a freely available open source software stack. This publication demonstrates the Dermatology Online Journal's use of the platform for non-biased independent open access publication.

  14. Results from the commissioning of a multi-modal endoscope for ultrasound and time of flight PET

    SciTech Connect

    Bugalho, Ricardo

    2015-07-01

    The EndoTOFPET-US collaboration has developed a multi-modal imaging system combining Ultrasound with Time-of-Flight Positron Emission Tomography into an endoscopic imaging device. The objective of the project is to obtain a coincidence time resolution of about 200 ps FWHM and to achieve about 1 mm spatial resolution of the PET system, while integrating all the components in a very compact detector suitable for endoscopic use. This scanner aims to be exploited for diagnostic and surgical oncology, as well as being instrumental in the clinical test of new biomarkers especially targeted for prostate and pancreatic cancer. (authors)

  15. Development and calibration of a microfluidic biofilm growth cell with flow-templating and multi-modal characterization.

    PubMed

    Paquet-Mercier, Francois; Karas, Adnane; Safdar, Muhammad; Aznaveh, Nahid Babaei; Zarabadi, Mirpouyan; Greener, Jesse

    2014-01-01

    We report the development of a microfluidic flow-templating platform with multi-modal characterization for studies of biofilms and their precursor materials. A key feature is a special three inlet flow-template compartment, which confines and controls the location of biofilm growth against a template wall. Characterization compartments include Raman imaging to study the localization of the nutrient solutions, optical microscopy to quantify biofilm biomass and localization, and cyclic voltammetry for flow velocity measurements. Each compartment is tested and then utilized to make preliminary measurements.

  16. Comparison of sleep-wake classification using electroencephalogram and wrist-worn multi-modal sensor data.

    PubMed

    Sano, Akane; Picard, Rosalind W

    2014-01-01

    This paper presents a comparison of sleep-wake classification using electroencephalogram (EEG) data and multi-modal data from a wrist-worn wearable sensor. We collected physiological data while participants were in bed: EEG, skin conductance (SC), skin temperature (ST), and acceleration (ACC) data from 15 college students; we then computed features and compared the intra-/inter-subject classification results. EEG features yielded 83% accuracy, while features from the wrist-worn sensor yielded 74%, and the combination of ACC and ST played the most important role in sleep/wake classification.
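
    As a toy illustration of actigraphy-style sleep/wake scoring (deliberately much simpler than the classifiers used in the paper), one can threshold per-epoch movement; the threshold and activity values below are invented:

```python
def classify_epochs(activity, threshold=0.02):
    """Label each epoch 'wake' if its mean movement (e.g. mean absolute
    acceleration change) exceeds a threshold, else 'sleep'."""
    return ["wake" if a > threshold else "sleep" for a in activity]

# Hypothetical per-epoch activity values from a wrist sensor
labels = classify_epochs([0.01, 0.05, 0.005, 0.30])
```

In practice a trained classifier over ACC, ST and SC features replaces the fixed threshold, but the epoch-by-epoch labeling structure is the same.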

  17. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and ease of access. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performance measures, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory or I/O among virtual machines created and managed by the different solutions; (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies; (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula; and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  18. Free and open-source automated 3-D microscope.

    PubMed

    Wijnen, Bas; Petersen, Emily E; Hunt, Emily J; Pearce, Joshua M

    2016-11-01

    Open-source technology has not only facilitated the expansion of the greater research community, but by lowering costs it has encouraged innovation and customizable design. Automated microscopy remains a challenge in accessibility due to the expense of inflexible, noninterchangeable stages. This paper presents a low-cost, open-source microscope 3-D stage. A RepRap 3-D printer was converted to an optical microscope equipped with a customized, 3-D printed holder for a USB microscope. Precision measurements were determined to have an average error of 10 μm at the maximum speed and 27 μm at the minimum recorded speed. Accuracy tests yielded an error of 0.15%. The machine is a true 3-D stage and thus able to operate with USB microscopes or conventional desktop microscopes. It is larger than all commercial alternatives, and is thus capable of high-depth images over unprecedented areas and complex geometries. The repeatability is below that of 2-D microscope stages, but testing shows that it is adequate for the majority of scientific applications. The open-source microscope stage costs less than 3-9% of the closest proprietary commercial stages. This extreme affordability vastly improves accessibility to 3-D microscopy throughout the world.

  19. How Open Source Can Still Save the World

    NASA Astrophysics Data System (ADS)

    Behlendorf, Brian

    Many of the world's major problems - economic distress, natural disaster responses, broken health care systems, education crises, and more - are not fundamentally information technology issues. However, in every case mentioned and more, there exist opportunities for Open Source software to uniquely change the way we can address these problems. At times this is about addressing a need for which no sufficient commercial market exists. For others, it is the way Open Source licenses free the recipient from obligations to the creators, creating a relationship of mutual empowerment rather than one of dependency. For yet others, it is the way the open collaborative processes that form around Open Source software provide a neutral ground for otherwise competitive parties to find a greatest common set of mutual needs to address together rather than in parallel. Several examples of such software exist today and are gaining traction. Governments, NGOs, and businesses are beginning to recognize the potential and are organizing to meet it. How far can this be taken?

  20. XMS: Cross-Platform Normalization Method for Multimodal Mass Spectrometric Tissue Profiling

    NASA Astrophysics Data System (ADS)

    Golf, Ottmar; Muirhead, Laura J.; Speller, Abigail; Balog, Júlia; Abbassi-Ghadi, Nima; Kumar, Sacheen; Mróz, Anna; Veselkov, Kirill; Takáts, Zoltán

    2015-01-01

    Here we present a proof of concept cross-platform normalization approach to convert raw mass spectra acquired by distinct desorption ionization methods and/or instrumental setups to cross-platform normalized analyte profiles. The initial step of the workflow is database driven peak annotation followed by summarization of peak intensities of different ions from the same molecule. The resulting compound-intensity spectra are adjusted to a method-independent intensity scale by using predetermined, compound-specific normalization factors. The method is based on the assumption that distinct MS-based platforms capture a similar set of chemical species in a biological sample, though these species may exhibit platform-specific molecular ion intensity distribution patterns. The method was validated on two sample sets of (1) porcine tissue analyzed by laser desorption ionization (LDI), desorption electrospray ionization (DESI), and rapid evaporative ionization mass spectrometric (REIMS) in combination with Fourier transformation-based mass spectrometry; and (2) healthy/cancerous colorectal tissue analyzed by DESI and REIMS with the latter being combined with time-of-flight mass spectrometry. We demonstrate the capacity of our method to reduce MS-platform specific variation resulting in (1) high inter-platform concordance coefficients of analyte intensities; (2) clear principal component based clustering of analyte profiles according to histological tissue types, irrespective of the used desorption ionization technique or mass spectrometer; and (3) accurate "blind" classification of histologic tissue types using cross-platform normalized analyte profiles.
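
    The two normalization steps described above (summarizing ion intensities per compound, then rescaling by predetermined compound-specific factors) can be sketched in Python; the peak annotations, compound names and factor values below are hypothetical, not taken from the paper:

```python
def normalize_profile(peaks, compound_of_ion, factors):
    """peaks: {ion: intensity}; compound_of_ion: database-driven annotation
    mapping each ion to its parent compound; factors: predetermined,
    compound-specific normalization factors for this platform."""
    # Step 1: sum intensities of different ions from the same molecule
    compounds = {}
    for ion, intensity in peaks.items():
        compound = compound_of_ion.get(ion)
        if compound is None:
            continue  # unannotated peaks are dropped
        compounds[compound] = compounds.get(compound, 0.0) + intensity
    # Step 2: rescale to the method-independent intensity scale
    return {c: i / factors[c] for c, i in compounds.items()}

# Two adduct ions of the same (hypothetical) lipid, plus an unannotated peak
peaks = {"760.6 [M+H]+": 100.0, "782.6 [M+Na]+": 50.0, "300.1": 5.0}
compound_of_ion = {"760.6 [M+H]+": "PC(34:1)", "782.6 [M+Na]+": "PC(34:1)"}
profile = normalize_profile(peaks, compound_of_ion, {"PC(34:1)": 3.0})
```

Because both platforms are reduced to compound-intensity profiles on a shared scale, downstream clustering and classification can mix spectra from different ionization techniques.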

  1. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications.

    PubMed

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-04-04

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check-up and compare their smartphone sensors against a large number of similar or identical models.
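
    The bias and noise parameters such an application gathers can be estimated from samples recorded while the phone lies still. A minimal sketch, using the common definitions (bias as mean offset from the reference value, noise as standard deviation); the readings and reference are invented, and the paper does not specify its exact estimators:

```python
import statistics

def bias_and_noise(samples, reference=0.0):
    """Estimate sensor bias (mean offset from the known reference) and
    noise (standard deviation) from stationary samples."""
    bias = statistics.fmean(samples) - reference
    noise = statistics.pstdev(samples)
    return bias, noise

# Synthetic stationary z-axis accelerometer readings (m/s^2), gravity = 9.81
readings = [9.83, 9.84, 9.82, 9.85, 9.83, 9.84]
bias, noise = bias_and_noise(readings, reference=9.81)
```

Aggregating such (bias, noise) pairs per smartphone model is what makes the shared database useful for cross-platform developers.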

  2. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check-up and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391

  3. Evaluation of Game Engines for Cross-Platform Development of Mobile Serious Games for Health.

    PubMed

    Kleinschmidt, Carina; Haag, Martin

    2016-01-01

    Studies have shown that serious games for health can improve patient compliance and help to increase the quality of medical education. Due to the growing availability of mobile devices, the development of cross-platform mobile apps in particular is helpful for improving healthcare. As such development can be highly time-consuming and expensive, an alternative development process is needed. Game engines are expected to simplify this process. Therefore, this article examines whether using game engines for cross-platform serious games for health can simplify development compared to developing a plain HTML5 app. First, a systematic review of the literature was conducted in different databases (MEDLINE, ACM and IEEE). Afterwards, three different game engines were chosen, evaluated in different categories and compared to the development of an HTML5 app. This was realized by implementing a prototypical application in the different engines and conducting a utility analysis. The evaluation shows that the Marmalade engine is the best choice for development in this scenario. Furthermore, the game engines have clear benefits over plain HTML5 development, as they provide components for graphics, physics, sound, etc. The authors recommend the Marmalade Engine for a cross-platform mobile Serious Game for Health.

  4. Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation.

    PubMed

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2014-04-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effects, and the ongoing maturation and myelination processes. During the first year of life, the image contrast between white and gray matter undergoes dramatic changes. In particular, the contrast inverts around 6-8 months of age, when white and gray matter are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion over the multi-modality T1, T2 and FA images. The segmentation result is then iteratively refined by integrating the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age using leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved high Dice ratios, which measure the volume overlap between automated and manual segmentations: 0.889±0.008 for white matter and 0.870±0.006 for gray matter.
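
    The patch-based label fusion idea can be illustrated with a simplified stand-in that uses distance-weighted voting over a library of labeled patches instead of true sparse coding (the patch values, labels and sigma parameter are illustrative, not the paper's method):

```python
import math

def patch_label_fusion(target, library, sigma=1.0):
    """Assign the label whose library patches best match the target patch.
    library: list of (patch, label) pairs from aligned images with
    ground-truth segmentations; weights decay with squared patch distance."""
    votes = {}
    for patch, label in library:
        dist2 = sum((a - b) ** 2 for a, b in zip(target, patch))
        weight = math.exp(-dist2 / (2 * sigma ** 2))
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# A bright target patch matches the white-matter exemplar more closely
label = patch_label_fusion([0.9, 1.0],
                           [([1.0, 1.0], "WM"), ([0.0, 0.1], "GM")])
```

Sparse representation replaces these heuristic weights with coefficients from fitting the target patch as a sparse combination of library patches; the voting step is analogous.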

  5. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  6. Differences in Multi-Modal Ultrasound Imaging between Triple Negative and Non-Triple Negative Breast Cancer.

    PubMed

    Li, Ziyao; Tian, Jiawei; Wang, Xiaowei; Wang, Ying; Wang, Zhenzhen; Zhang, Lei; Jing, Hui; Wu, Tong

    2016-04-01

    The objective of this study was to identify multi-modal ultrasound imaging parameters that could potentially help to differentiate between triple negative breast cancer (TNBC) and non-TNBC. Conventional ultrasonography, ultrasound strain elastography and 3-D ultrasound (3-D-US) findings from 50 TNBC and 179 non-TNBC patients were retrospectively reviewed. Immunohistochemical examination was used as the reference gold standard for cancer subtyping. Different ultrasound modalities were initially analyzed to define TNBC-related features. Subsequently, logistic regression analysis was applied to TNBC-related features to establish models for predicting TNBC. TNBCs often presented as micro-lobulated, markedly hypo-echoic masses with an abrupt interface (p = 0.015, 0.0015 and 0.004, compared with non-TNBCs, respectively) on conventional ultrasound, and showed a diminished retraction pattern phenomenon in the coronal plane (p = 0.035) on 3-D-US. Our findings suggest that B-mode ultrasound and 3-D-US in multi-modality ultrasonography could be a useful non-invasive technique for differentiating TNBCs from non-TNBCs.

  7. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

    NASA Astrophysics Data System (ADS)

    Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

    2015-03-01

    Advances in medical knowledge give clinicians more objective information for a diagnosis. There is therefore an increasing need for bibliographic search engines that help make information search faster. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases, each including a textual description and several images. In the context of this campaign many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion that decides when to use multi-modal (text and visual) or text-only approaches is presented. The proposed method integrates the text information contained in MeSH (Medical Subject Headings) terms extracted from the query with visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with 77.15% accuracy.
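
    A generic late-fusion sketch of this decision: when the query-adaptive criterion rejects visual evidence, fall back to text-only scores; otherwise blend the two score lists. The weighting scheme and alpha value are assumptions for illustration, not the authors' formula:

```python
def fuse_scores(text_scores, visual_scores, use_visual, alpha=0.7):
    """Combine per-document retrieval scores. text_scores/visual_scores:
    {doc_id: score}. use_visual is the output of the query-adaptive
    criterion; alpha weights the text modality."""
    if not use_visual:
        return dict(text_scores)  # text-only retrieval
    docs = set(text_scores) | set(visual_scores)
    return {d: alpha * text_scores.get(d, 0.0)
               + (1 - alpha) * visual_scores.get(d, 0.0)
            for d in docs}

fused = fuse_scores({"a": 1.0, "b": 0.5}, {"b": 1.0, "c": 0.8},
                    use_visual=True)
```

The interesting part of the paper is the criterion itself (MeSH-term/visual-feature synonym relations); the fusion step above is the standard machinery it switches on or off.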

  8. TH-C-12A-12: Veritas: An Open Source Tool to Facilitate User Interaction with TrueBeam Developer Mode

    SciTech Connect

    Mishra, P; Lewis, J; Etmektzoglou, T; Svatos, M

    2014-06-15

    Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using Qt Designer, a ‘what-you-see-is-what-you-get’ (WYSIWYG) tool for building graphical user interfaces (GUIs). The different components of the GUI are integrated using Qt's signals and slots mechanism. Functionality is added using PySide, an open-source, cross-platform Python binding for the Qt framework. The generated XML code is immediately visible, making Veritas an interactive learning tool. A user starts from an anonymized DICOM file or an XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter ‘B’ with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter ‘B’ was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open-source tool, Veritas, for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open-source initiative, creating the possibility for future developments.
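
    The kind of control-point-to-XML generation Veritas automates can be sketched with Python's standard library. Note that the element names below are illustrative placeholders, not the actual TrueBeam Developer Mode schema:

```python
import xml.etree.ElementTree as ET

def build_plan(control_points):
    """Serialize a list of axis-state dicts (one per control point) into
    an XML plan. Element names are hypothetical, for illustration only."""
    root = ET.Element("Plan")
    cps = ET.SubElement(root, "ControlPoints")
    for state in control_points:
        cp = ET.SubElement(cps, "Cp")
        for axis, value in state.items():
            ET.SubElement(cp, axis).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Two control points sweeping a (hypothetical) gantry axis while delivering MU
xml_text = build_plan([{"GantryRtn": 180.0, "Mu": 0.0},
                       {"GantryRtn": 181.0, "Mu": 1.0}])
```

A GUI like Veritas sits in front of exactly this serialization step: each graphical menu edits one axis-state dict, and the XML view updates from the regenerated output.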

  9. A Kernel for Open Source Drug Discovery in Tropical Diseases

    PubMed Central

    Ortí, Leticia; Carbajo, Rodrigo J.; Pieper, Ursula; Eswar, Narayanan; Maurer, Stephen M.; Rai, Arti K.; Taylor, Ginger; Todd, Matthew H.; Pineda-Lucena, Antonio; Sali, Andrej; Marti-Renom, Marc A.

    2009-01-01

    Background Conventional patent-based drug development incentives work badly for the developing world, where commercial markets are usually small to non-existent. For this reason, the past decade has seen extensive experimentation with alternative R&D institutions ranging from private–public partnerships to development prizes. Despite extensive discussion, however, one of the most promising avenues—open source drug discovery—has remained elusive. We argue that the stumbling block has been the absence of a critical mass of preexisting work that volunteers can improve through a series of granular contributions. Historically, open source software collaborations have almost never succeeded without such “kernels”. Methodology/Principal Findings Here, we use a computational pipeline for: (i) comparative structure modeling of target proteins, (ii) predicting the localization of ligand binding sites on their surfaces, and (iii) assessing the similarity of the predicted ligands to known drugs. Our kernel currently contains 143 and 297 protein targets from ten pathogen genomes that are predicted to bind a known drug or a molecule similar to a known drug, respectively. The kernel provides a source of potential drug targets and drug candidates around which an online open source community can nucleate. Using NMR spectroscopy, we have experimentally tested our predictions for two of these targets, confirming one and invalidating the other. Conclusions/Significance The TDI kernel, which is being offered under the Creative Commons attribution share-alike license for free and unrestricted use, can be accessed on the World Wide Web at http://www.tropicaldisease.org. We hope that the kernel will facilitate collaborative efforts towards the discovery of new drugs against parasites that cause tropical diseases. PMID:19381286

  10. MicMac GIS application: free open source

    NASA Astrophysics Data System (ADS)

    Duarte, L.; Moutinho, O.; Teodoro, A.

    2016-10-01

    The use of Remotely Piloted Aerial Systems (RPAS) for remote sensing applications is becoming more frequent, as on-board camera and platform technologies become a serious contender to satellite and airplane imagery. MicMac is a photogrammetric tool for image matching that can be used in different contexts. It is open source software that can be used from the command line or through a graphic interface (for each command). The main objective of this work was the integration of MicMac with QGIS, which is also open source software, in order to create a new open source tool applied to photogrammetry/remote sensing. The Python language was used to develop the application, which would be very useful in the manipulation and 3D modelling of a set of images. The aim was to create a toolbar in QGIS exposing the basic functionalities through intuitive graphic interfaces. The toolbar is composed of three buttons: produce the point cloud, create the Digital Elevation Model (DEM) and produce the orthophoto of the study area. The application was tested on 35 photos, a subset of images acquired by a RPAS in the Aguda beach area, Porto, Portugal. They were used to create a 3D terrain model and, from this model, to obtain an orthophoto and the corresponding DEM. The code is open and can be modified according to user requirements. This integration, combined with GIS capabilities, would be very useful to the photogrammetry and remote sensing community.
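
    Behind such a toolbar, each button typically shells out to MicMac's mm3d command-line tools (Tapioca for tie points, Tapas for orientation, AperiCloud for the point cloud, Malt for dense matching). A sketch of how the command lines might be assembled; the specific arguments and stage choices are illustrative assumptions, not taken from the paper:

```python
def micmac_pipeline(image_pattern, orientation="RadialStd"):
    """Assemble the mm3d invocations behind the three buttons
    (point cloud, DEM, orthophoto). Each entry is an argv list
    suitable for subprocess.run(). Argument values are illustrative."""
    return [
        ["mm3d", "Tapioca", "MulScale", image_pattern, "500", "1500"],
        ["mm3d", "Tapas", orientation, image_pattern],
        ["mm3d", "AperiCloud", image_pattern, orientation],
        ["mm3d", "Malt", "Ortho", image_pattern, orientation],
    ]

cmds = micmac_pipeline(".*JPG")
```

Keeping the pipeline as argv lists makes the QGIS plugin a thin wrapper: each button runs its slice of the list and reports progress, while the heavy lifting stays in MicMac.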

  11. The ALPS Project: Open Source Software for Quantum Lattice Models

    NASA Astrophysics Data System (ADS)

    Trebst, Simon

    2004-03-01

    Algorithms for the simulation of strongly correlated quantum lattice models have matured, and there is increasing demand for reliable simulation results, both from theoreticians to test ideas and from experimental researchers as a means of data analysis. Unlike in other fields, there have been no "community codes" available; computational experts have written individual codes, adjusting them for the specific needs of new projects and thereby investing weeks to months in software development for each project. We will present experiences with the ALPS collaboration, an open source effort aiming at simplifying the development of simulation codes for strongly correlated classical and quantum lattice models. It provides powerful but generic libraries and open-source application programs (such as classical and quantum Monte Carlo, exact diagonalization, DMRG, and others), intended also for non-experts. We will especially address three topics that are also relevant to other similar efforts. First, license issues have been extensively discussed, especially concerning the scientific return of making source codes available to the community; the ALPS license is a compromise that ensures scientific return by requesting citations to the original authors of the codes while making sources openly available for future development. Second, we discuss the coordination of an international collaboration, with researchers contributing from Austria, France, Germany, Japan and Switzerland, through intense developer workshops on a semi-annual basis and annual user workshops. Third, we address the funding situation for such a joint open source development effort, which is often classified more as an infrastructure project than as a research project. Work done with the ALPS collaboration initiated by M. Troyer (ETH) and S. Todo (Tokyo). For details and a list of members see http://alps.comp-phys.org/

  12. Nowcasting influenza outbreaks using open-source media reports.

    SciTech Connect

    Ray, Jaideep; Brownstein, John S.

    2013-02-01

    We construct and verify a statistical method to nowcast influenza activity from a time-series of the frequency of reports concerning influenza related topics. Such reports are published electronically by both public health organizations and newspapers/media sources, and thus can be harvested easily via web crawlers. Since media reports are timely, whereas reports from public health organizations are delayed by at least two weeks, using timely, open-source data to compensate for the lag in "official" reports can be useful. We use morbidity data from networks of sentinel physicians (both the Centers for Disease Control's ILINet and France's Sentinelles network) as the gold standard of influenza-like illness (ILI) activity. The time-series of media reports is obtained from HealthMap (http://healthmap.org). We find that the time-series of media reports shows some correlation (~0.5) with ILI activity; further, this can be leveraged into an autoregressive moving average model with exogenous inputs (ARMAX model) to nowcast ILI activity. We find that the ARMAX models have more predictive skill compared to autoregressive (AR) models fitted to ILI data, i.e., it is possible to exploit the information content in the open-source data. We also find that when the open-source data are non-informative, the ARMAX models reproduce the performance of AR models. The statistical models are tested on data from the 2009 swine-flu outbreak as well as the mild 2011-2012 influenza season in the U.S.A.
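
    The ARMAX idea can be reduced to a toy ARX(1) regression, y_t = a*y_{t-1} + b*x_t, where y is ILI activity and x is the media-report count; both the two-parameter form and the synthetic data below are illustrative assumptions, not the paper's actual model.

```python
def fit_arx1(y, x):
    """Least-squares fit of y[t] = a*y[t-1] + b*x[t] (no intercept, no MA term)."""
    s_yy = s_xx = s_yx = s_ty = s_tx = 0.0
    for t in range(1, len(y)):
        yl, xt, yt = y[t - 1], x[t], y[t]
        s_yy += yl * yl   # sum of y[t-1]^2
        s_xx += xt * xt   # sum of x[t]^2
        s_yx += yl * xt   # regressor cross term
        s_ty += yl * yt   # regressor-target products
        s_tx += xt * yt
    det = s_yy * s_xx - s_yx * s_yx
    a = (s_ty * s_xx - s_tx * s_yx) / det  # solve the 2x2 normal equations
    b = (s_tx * s_yy - s_ty * s_yx) / det
    return a, b

# Synthetic ILI series driven by media counts with known a = 0.6, b = 0.3.
x = [0.0, 2.0, 1.0, 3.0, 2.0, 4.0, 1.0, 2.0]
y = [1.0]
for t in range(1, len(x)):
    y.append(0.6 * y[-1] + 0.3 * x[t])

a, b = fit_arx1(y, x)
nowcast = a * y[-1] + b * 5.0  # nowcast from last ILI value and today's media count
```

    Because the synthetic series is noiseless, the fit recovers the generating coefficients, and the nowcast uses today's media count before any official ILI report is available.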

  13. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated into hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface capable of all these tasks already exists for hydrological models: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach to interact with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough so that models can interact even if the model is coded in a different language, represent
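
    The model life cycle described above (create an instance, propagate, get/set values, free) can be mimicked with a toy model and a single scalar nudging update; the class and function names are illustrative, not the actual OpenDA or OpenMI API.

```python
class ToyHydroModel:
    """Stand-in for a model exposing an OpenMI/OpenDA-style life cycle."""
    def __init__(self, storage: float, recession: float):
        self.storage = storage      # state, e.g. catchment storage
        self.recession = recession  # model parameter

    def propagate(self, rainfall: float) -> None:
        """Advance one time step with a linear-reservoir water balance."""
        self.storage = self.recession * self.storage + rainfall

    def get_value(self) -> float:
        return self.storage

    def set_value(self, value: float) -> None:
        self.storage = value

def assimilate(model, observation: float, gain: float = 0.5) -> None:
    """One nudging update: blend the model forecast toward the observation."""
    forecast = model.get_value()
    model.set_value(forecast + gain * (observation - forecast))

model = ToyHydroModel(storage=10.0, recession=0.9)
model.propagate(rainfall=2.0)        # forecast step: 0.9 * 10 + 2 = 11.0
assimilate(model, observation=12.0)  # analysis step: 11.0 + 0.5 * (12 - 11) = 11.5
```

    Real frameworks replace the scalar gain with ensemble- or variational-based update schemes, but the get/propagate/set contract between model and DA toolbox is the same.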

  14. Open-source, Rapid Reporting of Dementia Evaluations.

    PubMed

    Graves, Rasinio S; Mahnken, Jonathan D; Swerdlow, Russell H; Burns, Jeffrey M; Price, Cathy; Amstein, Brad; Hunt, Suzanne L; Brown, Lexi; Adagarla, Bhargav; Vidoni, Eric D

    2015-01-01

    The National Institutes of Health Alzheimer's Disease Center consortium requires member institutions to build and maintain a longitudinally characterized cohort with a uniform standard data set. Increasingly, centers are employing electronic data capture to acquire data at annual evaluations. In this paper, the University of Kansas Alzheimer's Disease Center reports on an open-source system of electronic data collection and reporting to improve efficiency. This Center capitalizes on the speed, flexibility and accessibility of the system to enhance the evaluation process while rapidly transferring data to the National Alzheimer's Coordinating Center. This framework holds promise for other consortia that regularly use and manage large, standardized datasets.

  15. Open-source, Rapid Reporting of Dementia Evaluations

    PubMed Central

    Graves, Rasinio S.; Mahnken, Jonathan D.; Swerdlow, Russell H.; Burns, Jeffrey M.; Price, Cathy; Amstein, Brad; Hunt, Suzanne L; Brown, Lexi; Adagarla, Bhargav; Vidoni, Eric D.

    2016-01-01

    The National Institutes of Health Alzheimer's Disease Center consortium requires member institutions to build and maintain a longitudinally characterized cohort with a uniform standard data set. Increasingly, centers are employing electronic data capture to acquire data at annual evaluations. In this paper, the University of Kansas Alzheimer's Disease Center reports on an open-source system of electronic data collection and reporting to improve efficiency. This Center capitalizes on the speed, flexibility and accessibility of the system to enhance the evaluation process while rapidly transferring data to the National Alzheimer's Coordinating Center. This framework holds promise for other consortia that regularly use and manage large, standardized datasets. PMID:26779306

  16. OpenStudio: An Open Source Integrated Analysis Platform; Preprint

    SciTech Connect

    Guglielmetti, R.; Macumber, D.; Long, N.

    2011-12-01

    High-performance buildings require an integrated design approach for all systems to work together optimally; systems integration needs to be incorporated in the earliest stages of design for efforts to be cost and energy-use effective. Building designers need a full-featured software framework to support rigorous, multidisciplinary building simulation. An open source framework - the OpenStudio Software Development Kit (SDK) - is being developed to address this need. In this paper, we discuss the needs that drive OpenStudio's system architecture and goals, provide a development status report (the SDK is currently in alpha release), and present a brief case study that illustrates its utility and flexibility.

  17. pyLIMA : an open source microlensing software

    NASA Astrophysics Data System (ADS)

    Bachelet, Etienne

    2017-01-01

    Planetary microlensing is a unique tool to detect cold planets around low-mass stars and is approaching a watershed in discoveries as near-future missions incorporate dedicated surveys. NASA and ESA have decided to complement WFIRST-AFTA and Euclid with microlensing programs to enrich our statistics about this planetary population. Of the many challenges inherent in these missions, the data analysis is of primary importance, yet it is often perceived as a time-consuming, complex and daunting barrier to participation in the field. We present the first open source modeling software for conducting a microlensing analysis. This software is written in Python and uses existing packages as much as possible.
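
    The heart of any microlensing modeling code is the point-source point-lens light curve, with magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) for lens-source separation u in Einstein radii. The event parameters below are illustrative; the snippet sketches the physics and is not taken from pyLIMA itself.

```python
import math

def magnification(u: float) -> float:
    """Point-source point-lens (Paczynski) magnification."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def separation(t: float, t0: float, u0: float, tE: float) -> float:
    """Lens-source separation in Einstein radii at time t."""
    return math.hypot(u0, (t - t0) / tE)

# Illustrative event: peak at t0 = 0 days, impact parameter u0 = 0.1,
# Einstein-radius crossing time tE = 20 days.
t0, u0, tE = 0.0, 0.1, 20.0
light_curve = [magnification(separation(t, t0, u0, tE)) for t in range(-50, 51, 5)]
```

    Fitting (t0, u0, tE) to observed fluxes, plus planetary perturbations on top of this curve, is the modeling problem such software automates.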

  18. Open Source Next Generation Visualization Software for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Rinker, George

    2016-01-01

    Mission control is evolving quickly, driven by the requirements of new missions and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).

  19. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  20. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases.

    PubMed

    Forbes, Jessica L; Kim, Regina E Y; Paulsen, Jane S; Johnson, Hans J

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%.
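
    The kind of cleanup such a tool supports can be illustrated on a toy 1-D label array: find labels occupying fewer voxels than a cutoff and hand their voxels to the most common valid neighbor. This pure-Python sketch only mimics the idea; the actual tool operates on 3-D atlases through SimpleITK and 3D Slicer.

```python
from collections import Counter

def reassign_small_labels(labels, min_voxels=2):
    """Give voxels of under-sized labels to their most common neighboring label."""
    counts = Counter(labels)
    small = {lab for lab, n in counts.items() if n < min_voxels}
    fixed = list(labels)
    for i, lab in enumerate(labels):
        if lab in small:
            neighbors = [labels[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(labels) and labels[j] not in small]
            if neighbors:
                fixed[i] = Counter(neighbors).most_common(1)[0][0]
    return fixed

# Label 9 is a one-voxel speck stranded inside region 3.
atlas_row = [3, 3, 9, 3, 3, 5, 5]
cleaned = reassign_small_labels(atlas_row)  # the speck is absorbed into region 3
```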

  1. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases

    PubMed Central

    Forbes, Jessica L.; Kim, Regina E. Y.; Paulsen, Jane S.; Johnson, Hans J.

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%. PMID:27536233

  2. An open source package for the IBA data format IDF

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.

    2014-08-01

    Ion Beam Analysis (IBA) codes and laboratories implement various formats to store spectral data and to describe the experimental conditions and simulation or fit parameters. These various data formats are isolated applications and generally incompatible - they are unable to "talk" to each other. The need for a universal IBA data format (IDF) has been recognised for many years, to allow easy transfer of data and simulation parameters between codes, as well as between experimentalists and data analysts. A standard data format, IDF, which is transparent, universal, and includes the most common features desired both by experimentalists who collect and archive data and by users who analyse the data, was previously presented. However, its actual implementation has been left to each individual software developer, and the sheer size of the full IDF definition has prevented widespread implementation, with only a few codes using the IDF. Open source software has now been developed to implement the IDF and made available to the community at http://idf.schemas.itn.pt/, both as source code and as a DLL that every code and lab can use to, finally, make data of different origins "talk" to each other. We report the main features of the open source IDF package developed.
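
    Because IDF is XML-based, a code can already read the pieces it needs with a standard XML parser while adopting the full package incrementally. The element and attribute names below are simplified stand-ins for illustration, not the actual IDF schema.

```python
import xml.etree.ElementTree as ET

# A stripped-down, hypothetical stand-in for an IDF file.
IDF_SNIPPET = """
<idf>
  <spectrum technique="RBS">
    <beam particle="He" energy="2000" units="keV"/>
    <data>10 12 15 30 120 80 20</data>
  </spectrum>
</idf>
"""

def read_spectrum(xml_text: str) -> dict:
    """Extract technique, beam energy and channel counts from the snippet."""
    spectrum = ET.fromstring(xml_text).find("spectrum")
    beam = spectrum.find("beam")
    return {
        "technique": spectrum.get("technique"),
        "energy_keV": float(beam.get("energy")),
        "counts": [int(v) for v in spectrum.find("data").text.split()],
    }

record = read_spectrum(IDF_SNIPPET)
```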

  3. Open source projects in software engineering education: a mapping study

    NASA Astrophysics Data System (ADS)

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study aims to summarize the literature on how OSP have been used to facilitate students' learning of SE. Method: A systematic mapping study was undertaken by identifying, filtering and classifying primary studies using a predefined strategy. Results: 72 papers were selected and classified. The main results were: (a) most studies focused on comprehensive SE courses, although some dealt with specific areas; (b) the most prevalent approach was the traditional project method; (c) studies' general goals were: learning SE concepts and principles by using OSP, learning open source software or both; (d) most studies tried out ideas in regular courses within the curriculum; (e) in general, students had to work with predefined projects; (f) there was a balance between approaches where instructors had either inside control or no control on the activities performed by students; (g) when learning was assessed, software artefacts, reports and presentations were the main instruments used by teachers, while surveys were widely used for students' self-assessment; (h) most studies were published in the last seven years. Conclusions: The resulting map gives an overview of the existing initiatives in this context and shows gaps where further research can be pursued.

  4. Building integrated business environments: analysing open-source ESB

    NASA Astrophysics Data System (ADS)

    Martínez-Carreras, M. A.; García Jimenez, F. J.; Gómez Skarmeta, A. F.

    2015-05-01

    Integration and interoperability are two concepts that have gained significant prominence in the business field, providing tools which enable enterprise application integration (EAI). In this sense, enterprise service bus (ESB) has played a crucial role as the underpinning technology for creating integrated environments in which companies may connect all their legacy-applications. However, the potential of these technologies remains unknown and some important features are not used to develop suitable business environments. The aim of this paper is to describe and detail the elements for building the next generation of integrated business environments (IBE) and to analyse the features of ESBs as the core of this infrastructure. For this purpose, we evaluate how well-known open-source ESB products fulfil these needs. Moreover, we introduce a scenario in which the collaborative system 'Alfresco' is integrated in the business infrastructure. Finally, we provide a comparison of the different open-source ESBs available for IBE requirements. According to this study, Fuse ESB provides the best results, considering features such as support for a wide variety of standards and specifications, documentation and implementation, security, advanced business trends, ease of integration and performance.

  5. Building an Open Source Framework for Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Jagers, B.; Meijers, E.; Villars, M.

    2015-12-01

    In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system, from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task, we started a few years ago the development of a new open source modeling environment, DeltaShell, that integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open source approach, combined with a modular design based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.

  6. Hypersonic simulations using open-source CFD and DSMC solvers

    NASA Astrophysics Data System (ADS)

    Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.

    2016-11-01

    Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.

  7. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    NASA Astrophysics Data System (ADS)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical task for favorable preprocessing. Skull-stripping depends on different factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (to allow for interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open source brain extraction tool (OSBET), composed of four steps that use simple, well-known operations (optimal thresholding, binary morphology, labeling and geometrical analysis) and that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques against a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short duration and excellent accuracy, obtaining the best Dice Coefficient metric. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
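
    Three of the four steps named above can be mimicked on a toy 2-D image: threshold it, label the 4-connected components, and keep the geometrically largest one as the "brain" (the morphology step is omitted for brevity). Everything below is an illustration of those operations, not OSBET's implementation.

```python
from collections import deque

def largest_component(mask):
    """Label 4-connected foreground and return a mask of the largest component."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                component, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    component.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(component) > len(best):
                    best = component
    out = [[0] * cols for _ in range(rows)]
    for y, x in best:
        out[y][x] = 1
    return out

# Toy "scan": a bright brain-like blob plus a small bright skull artifact.
image = [[0, 9, 9, 0, 0, 7],
         [0, 9, 9, 0, 0, 0],
         [0, 9, 9, 9, 0, 0]]
THRESHOLD = 5  # assumed optimal threshold
mask = [[1 if v > THRESHOLD else 0 for v in row] for row in image]
brain = largest_component(mask)  # drops the isolated artifact at (0, 5)
```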

  8. Clarity: An Open Source Manager for Laboratory Automation

    PubMed Central

    Delaney, Nigel F.; Echenique, José Rojas; Marx, Christopher J.

    2013-01-01

    Software to manage automated laboratories interfaces with hardware instruments, gives users a way to specify experimental protocols, and schedules activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity: a laboratory automation manager that is hardware agnostic, portable, extensible and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity; demonstrate an example of its implementation for the automated analysis of bacterial growth; and describe how the program can be extended to manage new hardware. Clarity is mature; well documented; actively developed; written in C# for the Common Language Infrastructure; and is free and open source software. These advantages set Clarity apart from currently available laboratory automation programs. PMID:23032169

  9. Clarity: an open-source manager for laboratory automation.

    PubMed

    Delaney, Nigel F; Rojas Echenique, José I; Marx, Christopher J

    2013-04-01

    Software to manage automated laboratories, when interfaced with hardware instruments, gives users a way to specify experimental protocols and schedule activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity, a laboratory automation manager that is hardware agnostic, portable, extensible, and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity, demonstrate an example of its implementation for the automated analysis of bacterial growth, and describe how the program can be extended to manage new hardware. Clarity is mature, well documented, actively developed, written in C# for the Common Language Infrastructure, and is free and open-source software. These advantages set Clarity apart from currently available laboratory automation programs. The source code and documentation for Clarity are available at http://code.google.com/p/osla/.
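
    The conflict-avoiding scheduling mentioned above amounts to serializing tasks per instrument. The greedy sketch below starts each task as soon as its instrument is free; the task names and first-come policy are illustrative assumptions, not Clarity's actual scheduler.

```python
def schedule(tasks):
    """Greedy per-resource serialization.

    tasks: list of (name, instrument, duration) tuples, in submission order.
    Returns {name: (start, end)}; a task starts when its instrument frees up.
    """
    free_at = {}  # instrument -> time at which it next becomes available
    plan = {}
    for name, instrument, duration in tasks:
        start = free_at.get(instrument, 0)
        plan[name] = (start, start + duration)
        free_at[instrument] = start + duration
    return plan

plan = schedule([
    ("read_plate_A", "plate_reader", 5),
    ("incubate_A", "incubator", 30),     # runs in parallel: different instrument
    ("read_plate_B", "plate_reader", 5), # waits for read_plate_A to finish
])
```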

  10. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.

  11. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  12. Bayesian vector autoregressive model for multi-subject effective connectivity inference using multi-modal neuroimaging data.

    PubMed

    Chiang, Sharon; Guindani, Michele; Yeh, Hsiang J; Haneef, Zulfi; Stern, John M; Vannucci, Marina

    2017-03-01

    In this article, a multi-subject vector autoregressive (VAR) modeling approach is proposed for inference on effective connectivity based on resting-state functional MRI data. The framework uses a Bayesian variable selection approach to allow for simultaneous inference on effective connectivity at both the subject and group level. Furthermore, it accounts for multi-modal data by integrating structural imaging information into the prior model, encouraging effective connectivity between structurally connected regions. Simulation studies demonstrate that the approach results in improved inference on effective connectivity at both the subject and group level, compared with currently used methods. The method is illustrated on temporal lobe epilepsy data, using both resting-state functional MRI and structural MRI. Hum Brain Mapp 38:1311-1332, 2017. © 2016 Wiley Periodicals, Inc.

  13. Development of EndoTOFPET-US, a multi-modal endoscope for ultrasound and time of flight positron emission tomography

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.

    2014-02-01

    The EndoTOFPET-US project aims at developing a multi-modal imaging device that combines Ultrasound with Time-Of-Flight Positron Emission Tomography into an endoscopic imaging device. The goal is to obtain a coincidence time resolution of about 200 ps FWHM and sub-millimetric spatial resolution for the PET head, integrating the components in a very compact detector suitable for endoscopic use. The scanner will be exploited for the clinical testing of new bio-markers especially targeted at prostate and pancreatic cancer, as well as for diagnostic and surgical oncology. This paper focuses on the status of the Time-Of-Flight Positron Emission Tomograph under development for the EndoTOFPET-US project.

  14. Modeling most likely pathways for smuggling radioactive and special nuclear materials on a worldwide multi-modal transportation network

    SciTech Connect

    Saeger, Kevin J; Cuellar, Leticia

    2010-10-28

Nuclear weapons proliferation is an existing and growing worldwide problem. To help devise strategies and support decisions to interdict the transport of nuclear material, we developed the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT), which provides an analytical approach for evaluating the probability that an adversary smuggling radioactive or special nuclear material will be detected during transit. We incorporate a global, multi-modal transportation network, explicit representation of designed and serendipitous detection opportunities, and multiple threat devices, material types, and shielding levels. This paper presents the general structure of PATRIOT, and focuses on the theoretical framework used to model the reliabilities of the network components, which are used to predict the most likely pathways to the target.
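The abstract does not spell out PATRIOT's algorithms, but a standard way to frame "most likely smuggling pathways" is to note that, if each network component detects independently with probability p, maximizing the probability of evading detection along a path is a shortest-path problem under edge weights -ln(1 - p). A sketch over an invented toy network:

```python
import heapq
from math import log

# Hypothetical transport network: each edge carries an independent
# detection probability. Minimizing total -ln(1 - p) along a path
# maximizes the product of non-detection probabilities.
edges = {
    "origin":  [("port_A", 0.05), ("port_B", 0.20)],
    "port_A":  [("border", 0.30)],
    "port_B":  [("border", 0.01)],
    "border":  [("target", 0.10)],
    "target":  [],
}

def most_likely_path(src, dst):
    """Dijkstra with -ln(1 - p_detect) edge weights."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, p in edges[u]:
            nd = d - log(1.0 - p)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(most_likely_path("origin", "target"))
```

Here the adversary's best route runs through `port_B`: its 20% port risk is outweighed by the nearly unguarded border crossing beyond it.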

  15. Inexpensive Open-Source Data Logging in the Field

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2013-12-01

I present a general-purpose open-source field-capable data logger, which provides a mechanism to develop dense networks of inexpensive environmental sensors. This data logger was developed as a low-power variant of the Arduino open-source development system, and is named the ALog ("Arduino Logger") BottleLogger (it is slim enough to fit inside a Nalgene water bottle) version 1.0. It features an integrated high-precision real-time clock, an SD card slot for high-volume data storage, and integrated power switching. The ALog can interface with sensors via six analog/digital pins, two digital pins, and one digital interrupt pin that can read event-based inputs, such as those from a tipping-bucket rain gauge. We have successfully tested the ALog BottleLogger with ultrasonic rangefinders (for water stage and snow accumulation and melt), temperature sensors, tipping-bucket rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that it triggers based on events. The source code for the ALog, including functions to interface with a range of commercially-available sensors, is provided as an Arduino C++ library with example implementations. All schematics, circuit board layouts, and source code files are open-source and freely available under GNU GPL v3.0 and Creative Commons Attribution-ShareAlike 3.0 Unported licenses. Through this work, we hope to foster a community-driven movement to collect field environmental data on a budget that permits citizen-scientists and researchers from low-income countries to collect the same high-quality data as researchers in wealthy countries. These data can provide information about global change to managers, governments, scientists, and interested citizens worldwide. Figure: watertight box with the ALog BottleLogger data logger on the left and a battery pack with three D cells on the right; data can be collected for 3-5 years on one set of batteries.

  16. Physics and 3D in Flash Simulations: Open Source Reality

    NASA Astrophysics Data System (ADS)

    Harold, J. B.; Dusenbery, P.

    2009-12-01

Over the last decade our ability to deliver simulations over the web has steadily advanced. The improvements in speed of the Adobe Flash engine, and the development of open source tools to expand it, allow us to deliver increasingly sophisticated simulation-based games through the browser, with no additional downloads required. In this paper we will present activities we are developing as part of two asteroids education projects: Finding NEO (funded through NSF and NASA SMD), and Asteroids! (funded through NSF). The first activity is Rubble!, an asteroid-deflection game built on the open source Box2D physics engine. This game challenges players to push asteroids into safe orbits before they crash into the Earth. The Box2D engine allows us to go well beyond simple 2-body orbital calculations and incorporate “rubble piles”. These objects, which are representative of many asteroids, are composed of 50 or more individual rocks which gravitationally bind and separate in realistic ways. Even bombs can be modeled with sufficient physical accuracy to convince players of the hazards of trying to “blow up” incoming asteroids. The ability to easily build games based on underlying physical models allows us to address physical misconceptions in a natural way: by having the player operate in a world that directly collides with those misconceptions. Rubble! provides a particularly compelling example of this due to the variety of well documented misconceptions regarding gravity. The second activity is a Light Curve challenge, which uses the open source PaperVision3D tools to analyze 3D asteroid models. The goal of this activity is to introduce the player to the concept of “light curves”, measurements of asteroid brightness over time which are used to calculate the asteroid’s period. These measurements can even be inverted to generate three-dimensional models of asteroids that are otherwise too small and distant to directly image. Through the use of the Paper
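The light-curve concept can be illustrated outside Flash or PaperVision3D: for a convex body, brightness is roughly proportional to projected area. A sketch for a rotating triaxial ellipsoid (shape parameters invented for illustration) shows the characteristic double-peaked curve whose period is half the spin period:

```python
import numpy as np

# Toy light curve: projected area of a triaxial ellipsoid spinning about
# its c axis, viewed equator-on. Semi-axes a > b give an elongated body.
a, b, c = 2.0, 1.0, 1.0

def brightness(phi):
    # Projected area of the ellipsoid viewed along (cos phi, sin phi, 0):
    #   A(phi) = pi * c * sqrt(b^2 cos^2 phi + a^2 sin^2 phi)
    return np.pi * c * np.sqrt(b**2 * np.cos(phi)**2 + a**2 * np.sin(phi)**2)

phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # one full rotation
curve = brightness(phi)

# The curve repeats every half rotation: two maxima per spin period.
print(round(curve.min(), 2), round(curve.max(), 2))
```

Inverting many such curves, taken at different viewing geometries, is what recovers a 3D shape model from an unresolved point of light.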

  17. Improving Data Catalogs with Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Schweitzer, R.; Hankin, S.; O'Brien, K.

    2013-12-01

The Global Earth Observation Integrated Data Environment (GEO-IDE) is NOAA's effort to successfully integrate data and information with partners in the national US-Global Earth Observation System (US-GEO) and the international Global Earth Observation System of Systems (GEOSS). As part of the GEO-IDE, the Unified Access Framework (UAF) is working to build momentum towards the goal of increased data integration and interoperability. The UAF project is moving towards this goal with an approach that includes leveraging well-known and widely used standards, as well as free and open source software. The UAF project shares the widely held conviction that the use of data standards is a key ingredient necessary to achieve interoperability. Many community-based consensus standards fail, though, due to poor compliance. Compliance problems emerge for many reasons: because the standards evolve through versions, because documentation is ambiguous, or because individual data providers find the standard inadequate as-is to meet their special needs. In addition, minimalist use of standards will lead to a compliant service, but one which is of low quality. In this presentation, we will be discussing the UAF effort to build a catalog cleaning tool which is designed to crawl THREDDS catalogs, analyze the data available, and then build a 'clean' catalog of data which is standards compliant and has a uniform set of data access services available. These data services include, among others, OPeNDAP, Web Coverage Service (WCS) and Web Mapping Service (WMS). We will also discuss how we are utilizing free and open source software and services to crawl and analyze the data and build the clean data catalog, as well as our efforts to help data providers improve their data catalogs. We'll discuss the use of open source software such as DataNucleus, Thematic Realtime Environmental Distributed Data Services (THREDDS), ncISO and the netCDF Java Common Data Model (CDM). We'll also demonstrate how we are
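A THREDDS catalog is plain XML, so the core parsing step of such a crawler can be sketched with the standard library. The catalog fragment below is made up (dataset names and paths are invented), but it uses the real InvCatalog 1.0 namespace and the usual `service`/`dataset` element structure:

```python
import xml.etree.ElementTree as ET

# Minimal, invented THREDDS catalog fragment. A real crawler would fetch
# this XML over HTTP and recurse into <catalogRef> elements.
CATALOG = """\
<catalog xmlns="http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0">
  <service name="all" serviceType="Compound" base="">
    <service name="odap" serviceType="OPeNDAP" base="/thredds/dodsC/"/>
    <service name="wcs"  serviceType="WCS"     base="/thredds/wcs/"/>
    <service name="wms"  serviceType="WMS"     base="/thredds/wms/"/>
  </service>
  <dataset name="sst_monthly" urlPath="sst/monthly.nc" serviceName="all"/>
  <dataset name="winds_daily" urlPath="winds/daily.nc" serviceName="all"/>
</catalog>
"""

NS = {"t": "http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0"}
root = ET.fromstring(CATALOG)

# Which access services does this catalog advertise, and for which datasets?
services = [s.get("serviceType") for s in root.iter(f"{{{NS['t']}}}service")]
datasets = [d.get("name") for d in root.findall("t:dataset", NS)]
print(services, datasets)
```

A "cleaning" pass would then check each dataset for the expected uniform set of services (OPeNDAP, WCS, WMS) before copying it into the clean catalog.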

  18. Flood hazard mapping using open source hydrological tools

    NASA Astrophysics Data System (ADS)

    Tollenaar, Daniel; Wensveen, Lex; Winsemius, Hessel; Schellekens, Jaap

    2014-05-01

Commonly, flood hazard maps are produced by building detailed hydrological and hydraulic models. These models are forced and parameterized by locally available, high resolution and preferably high quality data. The models use a high spatio-temporal resolution, resulting in large computational effort. Also, many hydraulic packages that solve 1D (canal) and 2D (overland) shallow water equations are neither freeware nor open source. In this contribution, we evaluate whether simplified open source data and models can be used for a rapid flood hazard assessment and to highlight areas where more detail may be required. The validity of this approach is tested by using four combinations of open-source tools: (1) a global hydrological model (PCR-GLOBWB, Van Beek and Bierkens, 2009) with a static inundation routine (GLOFRIS, Winsemius et al. 2013); (2) a global hydrological model with a dynamic inundation model (Subgrid, Stelling, 2012); (3) a local hydrological model (WFLOW) with a static inundation routine; (4) and a local hydrological model with a dynamic inundation model. The applicability of the tools is assessed on (1) accuracy in reproducing the phenomenon, (2) time for model setup and (3) computational time. The performance is tested in a case study in the Rio Mamoré, one of the tributaries of the Amazon River (230,000 km2). References: Stelling, G.S.: Quadtree flood simulations with sub-grid digital elevation models, Proceedings of the ICE - Water Management, Volume 165, Issue 10, 01 November 2012, pages 567-580 Winsemius, H. C., Van Beek, L. P. H., Jongman, B., Ward, P. J., and Bouwman, A.: A framework for global river flood risk assessments, Hydrol. Earth Syst. Sci. Discuss., 9, 9611-9659, doi:10.5194/hessd-9-9611-2012, 2012 Van Beek, L. P. H. and Bierkens, M. F. P.: The global hydrological model PCR-GLOBWB: conceptualization, parameterization and verification, Dept. of Physical Geography, Utrecht University, Utrecht, available at: http

  19. Distributed flow estimation and closed-loop control of an underwater vehicle with a multi-modal artificial lateral line.

    PubMed

    DeVries, Levi; Lagor, Francis D; Lei, Hong; Tan, Xiaobo; Paley, Derek A

    2015-03-25

Bio-inspired sensing modalities enhance the ability of autonomous vehicles to characterize and respond to their environment. This paper concerns the lateral line of cartilaginous and bony fish, which is sensitive to fluid motion and allows fish to sense oncoming flow and the presence of walls or obstacles. The lateral line consists of two types of sensing modalities: canal neuromasts measure approximate pressure gradients, whereas superficial neuromasts measure local flow velocities. By employing an artificial lateral line, the performance of underwater sensing and navigation strategies is improved in dark, cluttered, or murky environments where traditional sensing modalities may be hindered. This paper presents estimation and control strategies enabling an airfoil-shaped unmanned underwater vehicle to assimilate measurements from a bio-inspired, multi-modal artificial lateral line and estimate flow properties for feedback control. We utilize potential flow theory to model the fluid flow past a foil in a uniform flow and in the presence of an upstream obstacle. We derive theoretically justified nonlinear estimation strategies to estimate the free-stream flow speed, angle of attack, and the relative position of an upstream obstacle. The feedback control strategy uses the estimated flow properties to execute bio-inspired behaviors including rheotaxis (the tendency of fish to orient upstream) and station-holding (the tendency of fish to hold position behind an upstream obstacle). A robotic prototype outfitted with a multi-modal artificial lateral line composed of ionic polymer metal composite and embedded pressure sensors experimentally demonstrates the distributed flow sensing and closed-loop control strategies.

  20. Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach.

    PubMed

    Mohammadi-Nejad, Ali-Reza; Hossein-Zadeh, Gholam-Ali; Soltanian-Zadeh, Hamid

    2017-03-14

Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA; regularized CCA; and ssCCA and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than 1×10^(-6). Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
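The ssCCA method itself is not reproduced here, but the standard CCA baseline it extends can be sketched in a few lines: center and whiten each data block, then take the SVD of the whitened cross-covariance; the singular values are the canonical correlations. The data below are synthetic, with one latent signal shared between the two "modalities":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.standard_normal(n)                  # shared latent signal
X = np.c_[z, rng.standard_normal((n, 2))] + 0.1 * rng.standard_normal((n, 3))
Y = np.c_[z, rng.standard_normal((n, 1))] + 0.1 * rng.standard_normal((n, 2))

def cca(X, Y):
    """Canonical correlations of two data blocks via whitening + SVD."""
    n = len(X)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(S):                        # S^{-1/2} for symmetric PD S
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)  # whitened cross-covariance
    return np.linalg.svd(K, compute_uv=False)

rho = cca(X, Y)
print(np.round(rho, 2))
```

The first canonical correlation is near 1 (the shared signal), the rest near 0. The paper's contribution is to regularize the weight vectors of this problem with sparsity and structural (spatial-neighborhood) penalties so that it remains well-posed for high-dimensional imaging data.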

  1. VEGF-loaded graphene oxide as theranostics for multi-modality imaging-monitored targeting therapeutic angiogenesis of ischemic muscle

    NASA Astrophysics Data System (ADS)

    Sun, Zhongchan; Huang, Peng; Tong, Guang; Lin, Jing; Jin, Albert; Rong, Pengfei; Zhu, Lei; Nie, Liming; Niu, Gang; Cao, Feng; Chen, Xiaoyuan

    2013-07-01

Herein we report the design and synthesis of multifunctional VEGF-loaded IR800-conjugated graphene oxide (GO-IR800-VEGF) for multi-modality imaging-monitored therapeutic angiogenesis of ischemic muscle. The as-prepared GO-IR800-VEGF positively targets VEGF receptors, maintains an elevated level of VEGF in ischemic tissues for a prolonged time, and finally leads to remarkable therapeutic angiogenesis of ischemic muscle. Although more efforts are required to further understand the in vivo behaviors and the long-term toxicology of GO, our work demonstrates the success of using GO for efficient VEGF delivery in vivo by intravenous administration and suggests the great promise of using graphene oxide in theranostic applications for treating ischemic disease. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr01573d

  2. Scalability of a cross-platform multi-threaded non-sequential optical ray tracer

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2011-10-01

The GelOE optical engineering software implements multi-threaded ray tracing with just a few simple cross-platform OpenMP directives. Timings as a function of the number of threads are presented for two quite different ZEMAX non-sequential sample problems running on a dual-boot 12-core Apple computer and compared not only to ZEMAX but also to FRED (plus single-threaded ASAP and CodeV). Also discussed are the relative merits of using Mac OSX or Windows 7, 32-bit or 64-bit mode, single or double precision floats, and the Intel or GCC compilers. It is found that simple cross-platform multi-threading can be more efficient than the Windows-specific kind used in the commercial codes, and that which ray tracer is fastest depends on the specific problem. Note that besides ray trace speed, overall productivity also depends on other things like visualization, ease-of-use, documentation, and technical support, none of which are rated here.

  3. The scheme and research of TV series multidimensional comprehensive evaluation on cross-platform

    NASA Astrophysics Data System (ADS)

    Chai, Jianping; Bai, Xuesong; Zhou, Hongjun; Yin, Fulian

    2016-10-01

To address the shortcomings of the traditional comprehensive evaluation system for TV programs, such as reliance on a single data source, neglect of new media, and the high time cost and difficulty of conducting surveys, a new evaluation of TV series is proposed in this paper, taking a cross-platform, multidimensional perspective on post-broadcast evaluation. The scheme treats data collected directly from cable television and the Internet as its research objects. Based on the TOPSIS principle, the data are preprocessed and computed into primary indicators that reflect different profiles of TV series viewing. Then, after reasonable weighting and summation by six methods (PCA, AHP, etc.), the primary indicators form composite indices for different channels or websites. The scheme avoids the inefficiency and difficulty of surveys and manual scoring; at the same time, it not only reflects different dimensions of viewing but also combines TV media and new media, completing the cross-platform multidimensional comprehensive evaluation of TV series.
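The abstract names TOPSIS as the underlying principle. A minimal TOPSIS sketch with an invented decision matrix and weights (the paper derives its weights with PCA, AHP and four other methods) shows how primary indicators become a composite closeness score:

```python
import numpy as np

# Rows: TV series; columns: hypothetical viewing indicators
# (e.g. rating share, completion rate, web plays in millions).
X = np.array([[0.82, 0.60, 120.0],
              [0.75, 0.70, 300.0],
              [0.90, 0.40,  80.0]])
w = np.array([0.5, 0.2, 0.3])          # illustrative weights, sum to 1

V = X / np.linalg.norm(X, axis=0)      # vector-normalize each indicator
V = V * w                              # apply weights
best, worst = V.max(0), V.min(0)       # ideal and anti-ideal solutions
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
score = d_worst / (d_best + d_worst)   # relative closeness, in (0, 1)

print(np.round(score, 3), int(score.argmax()))
```

Series closest to the ideal solution ranks first; here the heavy weight on the web-play indicator lifts the second series to the top. All indicators are treated as benefit criteria in this sketch.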

  4. Spatial Information Processing: Standards-Based Open Source Visualization Technology

    NASA Astrophysics Data System (ADS)

    Hogan, P.

    2009-12-01

Spatial information intelligence is a global issue that will increasingly affect our ability to survive as a species. Collectively we must better appreciate the complex relationships that make life on Earth possible. Providing spatial information in its native context can accelerate our ability to process that information. To maximize this ability to process information, three basic elements are required: data delivery (server technology), data access (client technology), and data processing (information intelligence). NASA World Wind provides open source client and server technologies based on open standards. The possibilities for data processing and data sharing are enhanced by this inclusive infrastructure for geographic information. It is interesting that this open source and open standards approach, unfettered by proprietary constraints, simultaneously provides for entirely proprietary use of this same technology. 1. WHY WORLD WIND? NASA World Wind began as a single program with specific functionality, to deliver NASA content. But as the possibilities for virtual globe technology became more apparent, we found that while enabling a new class of information technology, we were also getting in the way. Researchers, developers and even users expressed their desire for World Wind functionality in ways that would serve their specific needs. They want it in their web pages. They want to add their own features. They want to manage their own data. They told us that only with this kind of flexibility could their objectives and the potential for this technology be truly realized. World Wind client technology is a set of development tools, a software development kit (SDK) that allows a software engineer to create applications requiring geographic visualization technology. 2. MODULAR COMPONENTRY Accelerated evolution of a technology requires that the essential elements of that technology be modular components such that each can advance independent of the other

  5. Management of Astronomical Software Projects with Open Source Tools

    NASA Astrophysics Data System (ADS)

    Briegel, F.; Bertram, T.; Berwein, J.; Kittmann, F.

    2010-12-01

In this paper we offer an innovative approach to managing the software development process with free open source tools: building and automated testing; a system to automate the compile/test cycle on a variety of platforms to validate code changes, using virtualization to compile in parallel on various operating system platforms; version control and change management; an enhanced wiki and issue tracking system for online documentation and reporting; and groupware tools such as a blog, discussion forum, and calendar. Starting with the Linc-Nirvana instrument, a new project and configuration management tool for developing astronomical software was sought. After evaluating various systems of this kind, we are satisfied with the selection we are using now. Following the lead of Linc-Nirvana, most of the other software projects at the MPIA now use it.

  6. GRASS GIS: The first Open Source Temporal GIS

    NASA Astrophysics Data System (ADS)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

GRASS GIS is a full-featured, general-purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time-stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time-stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality ranges from space time dataset and time-stamped map layer management
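The temporal-topology reasoning mentioned above can be sketched outside GRASS GIS. The snippet below is not the GRASS Temporal Framework API (map names and dates are invented); it only classifies a few Allen-style relations between time-stamped map intervals, the kind of predicate a temporal GIS needs when matching time series:

```python
from datetime import date

def relation(a, b):
    """A small subset of Allen's interval relations for (start, end) pairs."""
    if a[1] <= b[0]:
        return "before"
    if b[1] <= a[0]:
        return "after"
    if a[0] <= b[0] and b[1] <= a[1]:
        return "contains"
    if b[0] <= a[0] and a[1] <= b[1]:
        return "during"
    return "overlaps"

# Invented time-stamped map layers: a yearly composite and a seasonal map.
ndvi_2014 = (date(2014, 1, 1), date(2015, 1, 1))
summer_14 = (date(2014, 6, 1), date(2014, 9, 1))
ndvi_2015 = (date(2015, 1, 1), date(2016, 1, 1))

print(relation(ndvi_2014, summer_14))   # the yearly map contains the summer map
print(relation(ndvi_2014, ndvi_2015))
```

Temporal modules use such relations to decide, for example, which precipitation maps to aggregate into each month of a temperature series.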

  7. Open source cardiology electronic health record development for DIGICARDIAC implementation

    NASA Astrophysics Data System (ADS)

Dugarte, Nelson; Medina, Rubén; Huiracocha, Lourdes; Rojas, Rubén

    2015-12-01

This article presents the development of a Cardiology Electronic Health Record (CEHR) system. The software consists of a structured algorithm designed under Health Level-7 (HL7) international standards. The novelty of the system is the integration of high-resolution ECG (HRECG) signal acquisition and processing tools, patient information management tools, and telecardiology tools. The acquisition tools manage and control the DIGICARDIAC electrocardiograph functions. The processing tools support HRECG signal analysis, searching for patterns indicative of cardiovascular pathologies. Incorporating telecardiology tools allows the system to communicate with other health care centers, decreasing access time to patient information. The CEHR system was developed entirely using open source software. Preliminary results of process validation showed the system's efficiency.

  8. Challenges of the Open Source Component Marketplace in the Industry

    NASA Astrophysics Data System (ADS)

    Ayala, Claudia; Hauge, Øyvind; Conradi, Reidar; Franch, Xavier; Li, Jingyue; Velle, Ketil Sandanger

The reuse of Open Source Software components available on the Internet is playing a major role in the development of Component Based Software Systems. Nevertheless, the special nature of the OSS marketplace has taken the “classical” concept of software reuse based on centralized repositories to a completely different arena based on massive reuse over the Internet. In this paper we provide an overview of the actual state of the OSS marketplace, and report preliminary findings about how companies interact with this marketplace to reuse OSS components. Such data was gathered from interviews in software companies in Spain and Norway. Based on these results we identify some challenges aimed at improving the industrial reuse of OSS components.

  9. Conceptual Architecture of Building Energy Management Open Source Software (BEMOSS)

    SciTech Connect

    Khamphanchai, Warodom; Saha, Avijit; Rathinavel, Kruthika; Kuzlu, Murat; Pipattanasomporn, Manisa; Rahman, Saifur; Akyol, Bora A.; Haack, Jereme N.

    2014-12-01

    The objective of this paper is to present a conceptual architecture of a Building Energy Management Open Source Software (BEMOSS) platform. The proposed BEMOSS platform is expected to improve sensing and control of equipment in small- and medium-sized buildings, reduce energy consumption and help implement demand response (DR). It aims to offer: scalability, robustness, plug and play, open protocol, interoperability, cost-effectiveness, as well as local and remote monitoring. In this paper, four essential layers of BEMOSS software architecture -- namely User Interface, Application and Data Management, Operating System and Framework, and Connectivity layers -- are presented. A laboratory test bed to demonstrate the functionality of BEMOSS located at the Advanced Research Institute of Virginia Tech is also briefly described.

  10. Open-source products for a lighting experiment device.

    PubMed

    Gildea, Kevin M; Milburn, Nelda

    2014-12-01

    The capabilities of open-source software and microcontrollers were used to construct a device for controlled lighting experiments. The device was designed to ascertain whether individuals with certain color vision deficiencies were able to discriminate between the red and white lights in fielded systems on the basis of luminous intensity. The device provided the ability to control the timing and duration of light-emitting diode (LED) and incandescent light stimulus presentations, to present the experimental sequence and verbal instructions automatically, to adjust LED and incandescent luminous intensity, and to display LED and incandescent lights with various spectral emissions. The lighting device could easily be adapted for experiments involving flashing or timed presentations of colored lights, or the components could be expanded to study areas such as threshold light perception and visual alerting systems.

  11. The Pixhawk Open-Source Computer Vision Framework for Mavs

    NASA Astrophysics Data System (ADS)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems, however, are geared towards operation at safe, obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  12. Open-Source Software in Computational Research: A Case Study

    DOE PAGES

    Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; ...

    2008-01-01

A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.

  13. IP address management : augmenting Sandia's capabilities through open source tools.

    SciTech Connect

    Nayar, R. Daniel

    2005-08-01

Internet Protocol (IP) address management is a growing concern at Sandia National Laboratories (SNL) and in the networking community as a whole. The pool of available IP addresses is nearly exhausted, and SNL currently lacks the justification to obtain more IP address space from the Internet Assigned Numbers Authority (IANA). A local entity must therefore exist to manage and allocate IP assignments efficiently. Ongoing efforts at Sandia have taken the form of a multifunctional database application known as the Network Information System (NWIS). NWIS is a database responsible for a multitude of network administrative services, including IP address management. This study explores the feasibility of augmenting NWIS's IP management capabilities using open source tools. Modifications of existing capabilities to better allocate the available IP address space are studied.
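NWIS's internals are not described in the abstract, but the core allocation task it performs can be sketched with Python's standard-library ipaddress module. The prefixes below are illustrative examples, not SNL's actual address plan:

```python
import ipaddress

# Toy allocator: hand out the next free /24 subnet from a site /16.
SITE = ipaddress.ip_network("10.20.0.0/16")

def allocate(used):
    """Return the first /24 inside SITE not already assigned, and record it."""
    for subnet in SITE.subnets(new_prefix=24):
        if subnet not in used:
            used.add(subnet)
            return subnet
    raise RuntimeError("address space exhausted")

# Hypothetical existing assignments pulled from the management database.
used = {ipaddress.ip_network("10.20.0.0/24"),
        ipaddress.ip_network("10.20.1.0/24")}
print(allocate(used))   # fills the first gap in the assignments
```

A real IP management system adds persistence, reservations, DNS/DHCP integration, and reclamation of stale assignments, but the gap-filling walk over the address space is the essential step for efficient allocation.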

  14. Modular Open-Source Software for Item Factor Analysis

    PubMed Central

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven

    2015-01-01

    This paper introduces an Item Factor Analysis (IFA) module for OpenMx, a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation and manipulation of models. Modular organization of the source code facilitates the easy addition of item models, item parameter estimation algorithms, optimizers, test scoring algorithms, and fit diagnostics all within an integrated framework. Three short example scripts are presented for fitting item parameters, latent distribution parameters, and a multiple group model. The availability of both IFA and structural equation modeling in the same software is a step toward the unification of these two methodologies. PMID:27065479
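OpenMx is an R package, so its model specification language is not shown here. As a language-neutral illustration of the kind of item model an IFA module fits, the two-parameter logistic (2PL) response function and its log-likelihood can be sketched as follows (abilities, item parameters, and responses are invented):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(responses, theta, a, b):
    """Log-likelihood of a 0/1 response vector for one examinee."""
    p = p_correct(theta, a, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta = 0.5                      # examinee ability
a = np.array([1.2, 0.8, 2.0])    # item discriminations
b = np.array([-0.5, 0.5, 0.5])   # item difficulties
responses = np.array([1, 1, 0])

print(np.round(p_correct(theta, a, b), 3))
print(round(log_likelihood(responses, theta, a, b), 3))
```

Item parameter estimation then maximizes this likelihood marginalized over a latent ability distribution; the module's pluggable optimizers and item models slot into exactly that loop.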

  15. Modular Open-Source Software for Item Factor Analysis.

    PubMed

    Pritikin, Joshua N; Hunter, Micheal D; Boker, Steven

    2015-06-01

    This paper introduces an Item Factor Analysis (IFA) module for OpenMx, a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation and manipulation of models. Modular organization of the source code facilitates the easy addition of item models, item parameter estimation algorithms, optimizers, test scoring algorithms, and fit diagnostics all within an integrated framework. Three short example scripts are presented for fitting item parameters, latent distribution parameters, and a multiple group model. The availability of both IFA and structural equation modeling in the same software is a step toward the unification of these two methodologies.

  16. Introducing djatoka: a reuse friendly, open source JPEG 2000 image server

    SciTech Connect

    Chute, Ryan M; Van De Sompel, Herbert

    2008-01-01

    The ISO-standardized JPEG 2000 image format has started to attract significant attention. Support for the format is emerging in major consumer applications, and the cultural heritage community seriously considers it a viable format for digital preservation. So far, only commercial image servers with JPEG 2000 support have been available. They come with significant license fees and typically provide customers with limited extensibility capabilities. Here, we introduce djatoka, an open source JPEG 2000 image server with an attractive basic feature set, and extensibility under the control of the community of implementers. We describe djatoka, and point to demonstrations that feature digitized images of marvelous historical manuscripts from the collections of the British Library and the University of Ghent. We also call upon the community to engage in further development of djatoka.

  17. Open-Source Java for Teaching Computational Physics

    NASA Astrophysics Data System (ADS)

    Wolfgang, Christian; Gould, Harvey; Gould, Joshua; Tobochnik, Jan

    2001-11-01

    The switch from procedural to object-oriented (OO) programming has produced dramatic changes in professional software design. OO techniques have not, however, been widely adopted in computational physics. Although most physicists are familiar with procedural languages such as Fortran, few physicists have formal training in computer science and few therefore have made the switch to OO programming. The continued use of procedural languages in education is due, in part, to the lack of up-to-date curricular materials that combine current computational physics research topics with an OO framework. This talk describes an Open-Source curriculum development project to produce such material. Examples will be presented that show how OO techniques can be used to encapsulate the relevant physics, the analysis, and the associated numerical methods.
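
The curriculum's own examples are in Java, but the encapsulation idea carries over directly to any OO language. A minimal illustrative sketch in Python (the Oscillator class, its parameters, and the integrator choice are invented for illustration, not taken from the curriculum):

```python
import math

class Oscillator:
    """A harmonic oscillator that encapsulates its own state, physics,
    and numerical method in one object."""
    def __init__(self, x0, v0, omega):
        self.x, self.v, self.omega = x0, v0, omega

    def step(self, dt):
        # Semi-implicit (symplectic) Euler: velocity first, then position,
        # which keeps the energy bounded over long runs.
        self.v -= self.omega**2 * self.x * dt
        self.x += self.v * dt

    def energy(self):
        return 0.5 * self.v**2 + 0.5 * (self.omega * self.x)**2

osc = Oscillator(x0=1.0, v0=0.0, omega=2.0 * math.pi)
for _ in range(1000):
    osc.step(0.0001)
# After t = 0.1 s (a tenth of the period), x closely follows cos(omega * t).
```

Swapping in a different force law or integrator touches only the class, not the driver code, which is the pedagogical point the talk makes.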

  18. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.
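
The strong-scaling figure quoted above (81% efficiency on 256 processors) follows from the standard definitions of speedup and parallel efficiency; a small sketch with hypothetical timings (the numbers below are invented to match the reported efficiency, not taken from the paper):

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup divided by processor count."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: a run taking 1000 s on one processor and
# 4.82 s on 256 processors gives 1000 / 4.82 / 256 ~ 0.81.
eff = efficiency(1000.0, 4.82, 256)
```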

  19. Open Source GIS Connectors to NASA GES DISC Satellite Data

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Pham, Long; Yang, Wenli

    2014-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) houses a suite of high spatiotemporal resolution GIS data including satellite-derived and modeled precipitation, air quality, and land surface parameter data. The data are valuable to various GIS research and applications at regional, continental, and global scales. On the other hand, many GIS users, especially those from the ArcGIS community, have difficulties in obtaining, importing, and using our data due to factors such as the variety of data products, the complexity of satellite remote sensing data, and the data encoding formats. We introduce a simple open source ArcGIS data connector that significantly simplifies the access and use of GES DISC data in ArcGIS.

  20. Integrating HCI Specialists into Open Source Software Development Projects

    NASA Astrophysics Data System (ADS)

    Hedberg, Henrik; Iivari, Netta

    Typical open source software (OSS) development projects are organized around technically talented developers, whose communication is based on technical aspects and source code. Decision-making power is gained through proven competence and activity in the project, and the opinions of non-technical end users are too often neglected. Human-computer interaction (HCI) specialists have likewise encountered difficulties in trying to participate in OSS projects, because there seems to be no clear authority or responsibility for them. In this paper, based on the HCI and OSS literature, we introduce an extended OSS development project organization model that adds a new level of communication and roles for attending to the human aspects of software. The proposed model makes the presence of HCI specialists visible in projects, and promotes interaction between developers and HCI specialists over the course of a project.

  1. An open source mobile platform for psychophysiological self tracking.

    PubMed

    Gaggioli, Andrea; Cipresso, Pietro; Serino, Silvia; Pioggia, Giovanni; Tartarisco, Gennaro; Baldus, Giovanni; Corda, Daniele; Riva, Giuseppe

    2012-01-01

    Self tracking is a recent trend in e-health that refers to the collection, elaboration and visualization of personal health data through ubiquitous computing tools such as mobile devices and wearable sensors. Here, we describe the design of a mobile self-tracking platform that has been specifically designed for clinical and research applications in the field of mental health. The smartphone-based application allows collecting a) self-reported feelings and activities from pre-programmed questionnaires; b) electrocardiographic (ECG) data from a wireless sensor platform worn by the user; c) movement activity information obtained from a tri-axis accelerometer embedded in the wearable platform. Physiological signals are further processed by the application and stored on the smartphone's memory. The mobile data collection platform is free and released under an open source licence to allow wider adoption by the research community (download at: http://sourceforge.net/projects/psychlog/).

  2. Implementing Open Source Platform for Education Quality Enhancement in Primary Education: Indonesia Experience

    ERIC Educational Resources Information Center

    Kisworo, Marsudi Wahyu

    2016-01-01

    Information and Communication Technology (ICT)-supported learning using free and open source platform draws little attention as open source initiatives were focused in secondary or tertiary educations. This study investigates possibilities of ICT-supported learning using open source platform for primary educations. The data of this study is taken…

  3. Open Access, Open Source and Digital Libraries: A Current Trend in University Libraries around the World

    ERIC Educational Resources Information Center

    Krishnamurthy, M.

    2008-01-01

    Purpose: The purpose of this paper is to describe the open access and open source movement in the digital library world. Design/methodology/approach: A review of key developments in the open access and open source movement is provided. Findings: Open source software and open access to research findings are of great use to scholars in developing…

  4. JSim, an open-source modeling system for data analysis

    PubMed Central

    Bassingthwaighte, James B.

    2013-01-01

    JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the repositories for Systems Biology Markup Language (SBML) and CellML models. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source; it

  5. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges that were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To do this declaratively and efficiently, a C library was implemented that makes the SPICE toolkit's geometrical computations accessible from the Python language and processes as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrated well into the Python language and ecosystem. Done well, this greatly simplifies the structure of the code and facilitates testing for correctness by automatic test suites and visual inspection. For rapid visualisation and confirmation of the correctness of results, the geometries were visualised with three.js, a popular JavaScript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, making them easily accessible to interested parties at large. As feedback came in and new ideas were explored, the authors benefited greatly from the design of the Python-to-SPICE library, which allowed algorithms to be expressed concisely and communicated more easily. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.

  6. The Future of ECHO: Evaluating Open Source Possibilities

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests by non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO Team have begun investigations into various deployment and Open Sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and Open Source models being investigated by the ECHO team along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code including increased testing demands, bug fixes, security fixes, and new features.

  7. Comparative Analysis Study of Open Source GIS in Malaysia

    NASA Astrophysics Data System (ADS)

    Rasid, Muhammad Zamir Abdul; Kamis, Naddia; Khuizham Abd Halim, Mohd

    2014-06-01

    Open source software might represent a major prospective change, capable of delivering value in various industries and of competing in developing countries. The leading purpose of this study is to discover the degree of adoption of Open Source Software (OSS) for Geographic Information System (GIS) applications within Malaysia, and whether low adoption derives from inadequate awareness of open source concepts or from technical deficiencies in open source tools. The research was carried out in two stages. The first stage involved a survey questionnaire to evaluate awareness and acceptance levels based on comparative feedback regarding OSS and commercial GIS; the survey was conducted among three groups of respondents: government servants, university students and lecturers, and individuals. Awareness was measured using a comprehension indicator and a perception indicator for each survey question; these indicators were designed during the analysis to provide a measurable and descriptive basis for the final result. The second stage involved an interview session with a major organization that operates open source web GIS, the Federal Department of Town and Country Planning Peninsular Malaysia (JPBD). The aim of this preliminary study was to understand the viewpoints of different groups of people on open source GIS, and to assess whether insufficient awareness of open source concepts and capabilities is a significant cause of the low level of adoption of open source solutions.

  8. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    PubMed

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.
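
The paper's tracking package is its own Python software; the core idea it builds on, differencing each frame against a static background and reducing the changed pixels to a centroid position, can be sketched generically (the toy frames and threshold below are invented for illustration, not the paper's code):

```python
def track_centroid(background, frame, threshold=30):
    """Locate a moving animal as the centroid of pixels that differ
    from a static background frame by more than `threshold`."""
    xs, ys = [], []
    for y, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for x, (bg, fr) in enumerate(zip(bg_row, fr_row)):
            if abs(fr - bg) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None   # no motion detected in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Toy 5x5 grayscale frames: a dark animal on a bright background.
background = [[200] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][2] = 25   # animal body pixels
frame[2][3] = 20
pos = track_centroid(background, frame)   # -> (2.5, 2.0)
```

Running this per frame over a video yields the position trace from which locomotion metrics such as distance travelled or thigmotaxis can be computed.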

  9. The Case for Open Source: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    Open source has continued to evolve and in the past three years the development of a graphical user interface has made it increasingly accessible and viable for end users without special training. Open source relies to a great extent on the free software movement. In this context, the term free refers not to cost, but to the freedom users have to…

  10. Choosing Open Source ERP Systems: What Reasons Are There For Doing So?

    NASA Astrophysics Data System (ADS)

    Johansson, Björn; Sudzina, Frantisek

    Enterprise resource planning (ERP) systems attract considerable attention, and open source software does as well. The question is then whether, and if so when, open source ERP systems will take off. The paper describes the status of open source ERP systems. Based on a literature review of ERP system selection criteria drawn from Web of Science articles, it discusses the reported reasons for choosing open source or proprietary ERP systems. Last but not least, the article presents some conclusions that could serve as input for future research. The paper aims at building a foundation for the basic question: what are the reasons for an organization to adopt open source ERP systems?

  11. A step-wise approach to define binding mechanisms of surrogate viral particles to multi-modal anion exchange resin in a single solute system.

    PubMed

    Brown, Matthew R; Johnson, Sarah A; Brorson, Kurt A; Lute, Scott C; Roush, David J

    2017-01-21

    Multi-modal anion exchange resins combine properties of both anion exchange and hydrophobic interaction chromatography for commercial protein polishing and may provide some viral clearance as well. From a regulatory viral clearance claim standpoint, it is unclear if multi-modal resins are truly orthogonal to either single-mode anion exchange or hydrophobic interaction columns. To answer this, a strategy of solute surface assays and High Throughput Screening of resin in concert with a scale-down model of large scale chromatography purification was employed to determine the predominant binding mechanisms of a panel of bacteriophage (i.e., PR772, PP7, and ϕX174) to multi-modal and single mode resins under various buffer conditions. The buffer conditions were restricted to buffer environments suggested by the manufacturer for the multi-modal resin. Each phage was examined for estimated net charge expression and relative hydrophobicity using chromatographic based methods. Overall, PP7 and PR772 bound to the multimodal resin via both anionic and hydrophobic moieties, while ϕX174 bound predominantly by the anionic moiety. Biotechnol. Bioeng. 2017;9999: 1-8. © 2017 Wiley Periodicals, Inc.

  12. VSEARCH: a versatile open source tool for metagenomics

    PubMed Central

    Flouri, Tomáš; Nichols, Ben; Quince, Christopher; Mahé, Frédéric

    2016-01-01

    Background VSEARCH is an open source and free of charge multithreaded 64-bit tool for processing and preparing metagenomics, genomics and population genomics nucleotide sequence data. It is designed as an alternative to the widely used USEARCH tool (Edgar, 2010), for which the source code is not publicly available, algorithm details are only rudimentarily described, and only a memory-confined 32-bit version is freely available for academic use. Methods When searching nucleotide sequences, VSEARCH uses a fast heuristic based on words shared by the query and target sequences in order to quickly identify similar sequences; a similar strategy is probably used in USEARCH. VSEARCH then performs optimal global sequence alignment of the query against potential target sequences, using full dynamic programming instead of the seed-and-extend heuristic used by USEARCH. Pairwise alignments are computed in parallel using vectorisation and multiple threads. Results VSEARCH includes most commands for analysing nucleotide sequences available in USEARCH version 7 and several of those available in USEARCH version 8, including searching (exact or based on global alignment), clustering by similarity (using length pre-sorting, abundance pre-sorting or a user-defined order), chimera detection (reference-based or de novo), dereplication (full length or prefix), pairwise alignment, reverse complementation, sorting, and subsampling. VSEARCH also includes commands for FASTQ file processing, i.e., format detection, filtering, read quality statistics, and merging of paired reads. Furthermore, VSEARCH extends functionality with several new commands and improvements, including shuffling, rereplication, masking of low-complexity sequences with the well-known DUST algorithm, a choice among different similarity definitions, and FASTQ file format conversion.
VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par
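
The shared-word heuristic described above, ranking target sequences by the number of words they share with the query before running full dynamic-programming alignment on the best candidates, can be illustrated in a few lines. The word length and sequences below are illustrative only, not VSEARCH's actual parameters or code:

```python
def words(seq, k=8):
    """The set of overlapping k-mers ('words') in a nucleotide sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def rank_candidates(query, targets, k=8):
    """Rank targets by the number of k-mers shared with the query.
    Expensive full alignment would then be run only on the top hits."""
    qwords = words(query, k)
    scored = [(len(qwords & words(t, k)), t) for t in targets]
    return sorted(scored, key=lambda s: -s[0])

query   = "ACGTACGTAGCTAGCTACGT"
targets = ["ACGTACGTAGCTAGCTACGA",   # near-identical: many shared words
           "TTTTTTTTTTTTTTTTTTTT"]   # unrelated: no shared words
ranked = rank_candidates(query, targets)
```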

  13. An Open Source modular platform for hydrological model implementation

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes, from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters etc. ENKI is designed to meet three different levels of involvement in model construction:
    - Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; uncertainty analysis directed towards input or parameter uncertainty. Need not know the model's composition of subroutines, the internal variables in the model, or the creation of method modules.
    - Model analysis: link together different process methods, including parallel setup of alternative methods for solving the same task; investigate the effect of different spatial discretization schemes. Need not
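
ENKI's plug-ins are C++ dynamic-link libraries exposing a narrow interface through which each routine declares its variables. The same pattern can be sketched in Python; the registry, base class, and the DegreeDaySnow routine below are hypothetical stand-ins for illustration, not ENKI's API:

```python
class Routine:
    """Base class for a process routine: declares its inputs, states and
    parameters so a framework can wire up time series, initial states
    and calibrated parameter values by name."""
    inputs, states, params = (), (), ()
    def step(self, env):
        raise NotImplementedError

REGISTRY = {}

def register(cls):
    """Class decorator standing in for ENKI's dll plug-in loading."""
    REGISTRY[cls.__name__] = cls
    return cls

@register
class DegreeDaySnow(Routine):
    inputs = ("temperature", "precipitation")
    states = ("snow_storage",)
    params = ("melt_rate",)
    def step(self, env):
        melt = max(0.0, env["temperature"]) * env["melt_rate"]
        env["snow_storage"] = max(0.0, env["snow_storage"]
                                   + env["precipitation"] - melt)

# The framework discovers the routine and its variables by name.
routine = REGISTRY["DegreeDaySnow"]()
env = {"temperature": 2.0, "precipitation": 1.0,
       "melt_rate": 0.5, "snow_storage": 3.0}
routine.step(env)   # snow_storage: 3.0 + 1.0 - 2.0 * 0.5 = 3.0
```

Because routines only declare names and transform the environment, alternative methods for the same task can be swapped in without touching the framework, which is the modularity the abstract describes.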

  14. HELIOS: A new open-source radiative transfer code

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin

    2015-12-01

    I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results of the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonça & Lee 2014, ApJS, 215, 4. [4] exoclime.net

  15. Open source software engineering for geoscientific modeling applications

    NASA Astrophysics Data System (ADS)

    Bilke, L.; Rink, K.; Fischer, T.; Kolditz, O.

    2012-12-01

    OpenGeoSys (OGS) is a scientific open source project for numerical simulation of thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media. The OGS software development community is distributed all over the world and people with different backgrounds are contributing code to a complex software system. The following points have to be addressed for successful software development: - Platform independent code - A unified build system - A version control system - A collaborative project web site - Continuous builds and testing - Providing binaries and documentation for end users OGS should run on a PC as well as on a computing cluster regardless of the operating system. Therefore the code should not include any platform specific feature or library. Instead open source and platform independent libraries like Qt for the graphical user interface or VTK for visualization algorithms are used. A source code management and version control system is a definite requirement for distributed software development. For this purpose Git is used, which enables developers to work on separate versions (branches) of the software and to merge those versions at some point to the official one. The version control system is integrated into an information and collaboration website based on a wiki system. The wiki is used for collecting information such as tutorials, application examples and case studies. Discussions take place in the OGS mailing list. To improve code stability and to verify code correctness a continuous build and testing system, based on the Jenkins Continuous Integration Server, has been established. 
    This server is connected to the version control system and does the following on every code change: - Compiles (builds) the code on every supported platform (Linux, Windows, MacOS) - Runs a comprehensive test suite of over 120 benchmarks and verifies the results - Runs software-development-related metrics on the code (like compiler warnings, code complexity

  16. Cross-platform, multi-language libraries for ionization and surface interaction effects in plasmas

    NASA Astrophysics Data System (ADS)

    Stoltz, Peter; Sides, Scott; Sizemore, Nate; Veitzer, Seth; Furman, Miguel; Vay, Jean-Luc

    2006-10-01

    We are developing a library of numerical algorithms for modeling plasma effects such as ionization, secondary electron production, and ion-surface interaction. The goal is to make this library accessible to a large number of researchers by making it available on multiple computing platforms (Linux, Windows, Mac OS X) and available in multiple computing languages (Fortran, C, Python, Java). We discuss our use of the GNU autotools and the Babel utility to accomplish this cross-platform, multi-language interface. We then discuss application of this library within the WARP particle-in-cell code for modeling effects of ion-induced electrons in the High Current Experiment and within the VORPAL particle-in-cell code for modeling kinetic effects in hollow cathode discharges.

  17. PyEPL: a cross-platform experiment-programming library.

    PubMed

    Geller, Aaron S; Schlefer, Ian K; Sederberg, Per B; Jacobs, Joshua; Kahana, Michael J

    2007-11-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL.

  18. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2014-05-30

    Figure 3. Several key resources have been put in place for the projects: • Community website dedicated to Open Chemistry projects • Git source code ...an open source project for the Android operating system, enables online review of code submissions from anyone while retaining control of what code is...chemistry community. The three Open Chemistry applications (MongoChem, MoleQueue, and Avogadro 2) are available in both source and binary form for

  19. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience.

  20. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes

    PubMed Central

    Warnat, Patrick; Eils, Roland; Brors, Benedikt

    2005-01-01

    Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever-increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach where results of different studies are combined on an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used for training of classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (> 85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis, which are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and microarray technologies.
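
    The rank-based transformation described above can be sketched as follows. This is a hypothetical simplification of the quantile-discretization idea, not the authors' implementation: each sample is reduced to rank-derived bin indices, so measurements from different platforms become directly comparable.

```python
def quantile_discretize(values, n_bins=8):
    """Map one sample's expression values to quantile bin indices.

    Because only the within-sample ranking matters, two platforms that
    rank the same genes identically produce identical discretized data.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, idx in enumerate(order):
        bins[idx] = rank * n_bins // len(values)  # bin index 0 .. n_bins-1
    return bins

# Two hypothetical "platforms" measuring the same four genes on different scales:
cdna  = [0.2, 1.5, 0.9, 3.1]        # log-ratios from a cDNA array
oligo = [20.0, 150.0, 90.0, 310.0]  # intensities from an oligo array
assert quantile_discretize(cdna, 4) == quantile_discretize(oligo, 4) == [0, 2, 1, 3]
```

    The discretized vectors from both platforms could then be pooled to train a single classifier, as in the study's SVM setup.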

  1. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high-quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources are also included. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and physico-chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d

  2. An Open Source Platform for Earth Science Research and Applications

    NASA Astrophysics Data System (ADS)

    Hiatt, S. H.; Ganguly, S.; Melton, F. S.; Michaelis, A.; Milesi, C.; Nemani, R. R.; Votava, P.; Wang, W.; Zhang, G.; Nasa Ecological Forecasting Lab

    2010-12-01

    The Terrestrial Observation and Prediction System (TOPS) at NASA-ARC's Ecological Forecasting Lab produces a suite of gridded data products in near real-time that are designed to enhance management decisions related to various environmental phenomena, as well as to advance scientific understanding of these ecosystem processes. While these data hold tremendous potential value for a wide range of disciplines, the large size of these datasets presents challenges in their analysis and distribution. Additionally, remote sensing data and their derivative ecological models rely on quality ground-based observations for evaluating and validating model outputs. The Ecological Forecasting Lab addresses these challenges by developing a web-based data gateway, leveraging a completely open source software stack. TOPS data is organized and made accessible via an OPeNDAP server. Toolkits such as GDAL and Matplotlib are used within a Python web server to generate dynamic views of TOPS data that can be incorporated into web applications, providing a simple interface for visualizing spatial and/or temporal trends. In order to facilitate collection of ground observations for validating and enhancing ecological models, we have implemented a web portal that allows volunteers to visualize current ecological conditions and to submit their observations. Initially we use this system to assist research related to plant phenology, but we plan to extend the system to support other areas of research as well.

  3. What makes computational open source software libraries successful?

    NASA Astrophysics Data System (ADS)

    Bangerth, Wolfgang; Heister, Timo

    2013-01-01

    Software is the backbone of scientific computing. Yet, while we regularly publish detailed accounts about the results of scientific software, and while there is a general sense of which numerical methods work well, our community is largely unaware of best practices in writing the large-scale, open source scientific software upon which our discipline rests. This is particularly apparent in the commonly held view that writing successful software packages is largely the result of simply ‘being a good programmer’ when in fact there are many other factors involved, for example the social skill of community building. In this paper, we consider what we have found to be the necessary ingredients for successful scientific software projects and, in particular, for software libraries upon which the vast majority of scientific codes are built today. In particular, we discuss the roles of code, documentation, communities, project management and licenses. We also briefly comment on the impact on academic careers of engaging in software projects.

  4. Dinosaur: A Refined Open-Source Peptide MS Feature Detector

    PubMed Central

    2016-01-01

    In bottom-up mass spectrometry (MS)-based proteomics, peptide isotopic and chromatographic traces (features) are frequently used for label-free quantification in data-dependent acquisition MS but can also be used for the improved identification of chimeric spectra or sample complexity characterization. Feature detection is difficult because of the high complexity of MS proteomics data from biological samples, which frequently causes features to intermingle. In addition, existing feature detection algorithms commonly suffer from compatibility issues, long computation times, or poor performance on high-resolution data. Because of these limitations, we developed a new tool, Dinosaur, with increased speed and versatility. Dinosaur has the functionality to sample algorithm computations through quality-control plots, which we call a plot trail. From the evaluation of this plot trail, we introduce several algorithmic improvements to further improve the robustness and performance of Dinosaur, with the detection of features for 98% of MS/MS identifications in a benchmark data set, while no other algorithm tested in this study exceeded 96% feature detection. We finally used Dinosaur to reimplement a published workflow for peptide identification in chimeric spectra, increasing chimeric identification from 26% to 32% over the standard workflow. Dinosaur is operating-system-independent and is freely available as open source at https://github.com/fickludd/dinosaur. PMID:27224449

  5. Nektar++: An open-source spectral/hp element framework

    NASA Astrophysics Data System (ADS)

    Cantwell, C. D.; Moxey, D.; Comerford, A.; Bolis, A.; Rocco, G.; Mengaldo, G.; De Grazia, D.; Yakovlev, S.; Lombard, J.-E.; Ekelschot, D.; Jordi, B.; Xu, H.; Mohamied, Y.; Eskilsson, C.; Nelson, B.; Vos, P.; Biotto, C.; Kirby, R. M.; Sherwin, S. J.

    2015-07-01

    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy over low-order techniques at reduced computational cost for a given number of degrees of freedom. However, their proliferation is often limited by their complexity, which makes these methods challenging to implement and use. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.

  6. Open-source telemedicine platform for wireless medical video communication.

    PubMed

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on an atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
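
    The PSNR metric used above for objective quality assessment can be sketched as follows. This is a generic PSNR computation under the assumption that transmitted and received frames have already been temporally aligned (e.g. by a VFD-style step); it is not taken from the paper's software.

```python
import math

def psnr(original, received, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equally sized frames,
    given here as flattened lists of 8-bit pixel values."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

frame_tx = [100, 120, 130, 140]
frame_rx = [101, 119, 131, 139]   # mildly distorted copy (MSE = 1)
print(round(psnr(frame_tx, frame_rx), 1))  # → 48.1
```

    Higher PSNR indicates less distortion; clinical ratings are then compared against such objective scores.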

  7. An open source device for operant licking in rats

    PubMed Central

    Longley, Matthew; Willis, Ethan L.; Tay, Cindy X.

    2017-01-01

    We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior. PMID:28229020
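
    The timing precision claimed above matters because rats lick in rapid bursts; a minimal sketch of how touch-sensor timestamps might be reduced to inter-lick intervals is shown below. The function and data are hypothetical illustrations, not code from the openbehavior repository.

```python
from datetime import datetime, timedelta

def inter_lick_intervals(timestamps):
    """Inter-lick intervals in milliseconds from a sorted list of
    touch-sensor event times. Within a burst, rats lick at roughly
    6-7 Hz, so intervals near 150 ms are expected."""
    return [
        (b - a) / timedelta(milliseconds=1)  # timedelta division gives float ms
        for a, b in zip(timestamps, timestamps[1:])
    ]

t0 = datetime(2017, 1, 1, 12, 0, 0)
licks = [t0 + timedelta(milliseconds=150 * i) for i in range(4)]
assert inter_lick_intervals(licks) == [150.0, 150.0, 150.0]
```

    On the actual device, such timestamps would come from a touch sensor read on the Raspberry Pi's GPIO pins before being logged or uploaded.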

  8. Special population planner 4 : an open source release.

    SciTech Connect

    Kuiper, J.; Metz, W.; Tanzman, E.

    2008-01-01

    Emergencies like Hurricane Katrina and the recent California wildfires underscore the critical need to meet the complex challenge of planning for individuals with special needs and for institutionalized special populations. People with special needs and special populations often have difficulty responding to emergencies or taking protective actions, and emergency responders may be unaware of their existence and situations during a crisis. Special Population Planner (SPP) is an ArcGIS-based emergency planning system released as an open source product. SPP provides for easy production of maps, reports, and analyses to develop and revise emergency response plans. It includes tools to manage a voluntary registry of data for people with special needs, integrated links to plans and documents, tools for response planning and analysis, preformatted reports and maps, and data on locations of special populations, facility and resource characteristics, and contacts. The system can be readily adapted for new settings without programming and is broadly applicable. Full documentation and a demonstration database are included in the release.

  9. Open-Source Photometric System for Enzymatic Nitrate Quantification

    PubMed Central

    Wittbrodt, B. T.; Squires, D. A.; Walbeck, J.; Campbell, E.; Campbell, W. H.; Pearce, J. M.

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive, portable, handheld photometer would greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment to perform this method for water-quality measurement, and iii) make the method easier to carry out in the field. The device is able to perform as well as commercial proprietary systems for less than 15% of the cost for materials. This allows for greater access to the technology and the new, safer nitrate testing technique. PMID:26244342
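
    A photometric assay like the one above typically converts an absorbance reading to a concentration via a linear calibration curve fitted on known standards. The sketch below shows that generic step; the standards and numbers are hypothetical and not taken from the paper.

```python
def fit_calibration(standards):
    """Least-squares line through (concentration, absorbance) standards,
    returning (slope, intercept)."""
    n = len(standards)
    sx = sum(c for c, _ in standards)
    sy = sum(a for _, a in standards)
    sxx = sum(c * c for c, _ in standards)
    sxy = sum(c * a for c, a in standards)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def concentration(absorbance, slope, intercept):
    """Invert the calibration line to estimate concentration."""
    return (absorbance - intercept) / slope

# Hypothetical standards: mg/L nitrate-N vs. measured absorbance
slope, intercept = fit_calibration([(0, 0.02), (1, 0.22), (2, 0.42)])
assert abs(concentration(0.32, slope, intercept) - 1.5) < 1e-9
```

    An open-source photometer would run this kind of fit once per batch of standards, then apply the inverse line to each field sample.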

  10. Assessing Ecohydrological Impacts of Forest Disturbance using Open Source Software

    NASA Astrophysics Data System (ADS)

    Lovette, J. P.; Chang, T.; Treglia, M.; Gan, T.; Duncan, J.

    2014-12-01

    In the past 30 years, land management protocols, climate change, and land use have radically changed the frequency and magnitude of disturbance regimes. Landscape-scale disturbances can change forest structure, resulting in impacts on adjacent watersheds that may affect water amount/quality for human and natural resource use. Our project quantifies hydrologic changes from a suite of disturbance events resulting in vegetation cover shifts at watersheds across the continental United States. These disturbance events include: wildfire, insect/disease, deforestation (logging), hurricanes, ice storms, and human land use. Our major question is: can the effects of disturbance on ecohydrology be generalized across regions, time scales, and spatial scales? Because the project uses a workflow of open source tools and publicly available data, this work could be extended and leveraged by other researchers. Spatial data on disturbance include the MODIS Global Disturbance Index (NTSG), Landsat 7 Global Forest Change (Hansen dataset), and the Degree of Human Modification (Theobald dataset). Ecohydrologic response data include USGS NWIS, USFS-LTER climDB/hydroDB, and the CUAHSI HIS.

  11. Gadgetron: an open source framework for medical image reconstruction.

    PubMed

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-06-01

    This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or "Gadgets" from raw data to reconstructed images. The data processing pipeline is configured dynamically at run-time based on an extensible markup language configuration description. The framework promotes reuse and sharing of reconstruction modules and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.
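
    The streaming "Gadget" pipeline described above can be sketched in miniature: data pass through a chain of modules, each transforming its input and handing the result downstream. This is a hypothetical toy in the spirit of the Gadgetron's Python-wrapper Gadgets, not the framework's actual API.

```python
class Gadget:
    """Minimal stand-in for a streaming processing module: apply a
    transformation, then pass the result to the next gadget (if any)."""
    def __init__(self, func, downstream=None):
        self.func, self.downstream = func, downstream

    def process(self, data):
        out = self.func(data)
        return self.downstream.process(out) if self.downstream else out

# Hypothetical three-stage chain: remove offset -> scale -> take magnitude.
magnitude = Gadget(lambda xs: [abs(x) for x in xs])
scale     = Gadget(lambda xs: [2 * x for x in xs], magnitude)
de_offset = Gadget(lambda xs: [x - 1 for x in xs], scale)

assert de_offset.process([1, 0, 3]) == [0, 2, 4]
```

    In the real framework the chain is assembled at run-time from an XML configuration, so the same modules can be recombined per reconstruction task without recompilation.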

  12. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
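
    The "SD method" precision figure quoted above is commonly computed as the dispersion of gaze samples recorded while the eye fixates a single target; a sketch is below. The formula (root of summed per-axis variances) and sample values are illustrative assumptions, not taken from the ExpertEyes source.

```python
import statistics

def precision_sd(gaze_points):
    """Precision by the SD method: combined standard deviation of
    (x, y) gaze samples, in degrees, during a steady fixation."""
    xs = [x for x, _ in gaze_points]
    ys = [y for _, y in gaze_points]
    return (statistics.pstdev(xs) ** 2 + statistics.pstdev(ys) ** 2) ** 0.5

# Hypothetical fixation samples scattered around (10.0, 5.0) degrees
samples = [(10.0, 5.0), (10.2, 5.0), (9.8, 5.0), (10.0, 5.2), (10.0, 4.8)]
assert abs(precision_sd(samples) - 0.179) < 1e-3
```

    Values around 0.2-0.3 degrees, as reported for the system, indicate that repeated samples of the same fixation land very close together.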

  13. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project by using NEXUS as the core for an oceanographic anomaly detection service and web portal, which we call OceanXtremes.
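
    The chunking idea above can be illustrated with a toy example: geo-referenced points are keyed by tile so a horizontally scaled store can shard them, and a subsetting query touches only the data inside a bounding box. Tile size and key scheme here are hypothetical, not the NEXUS design.

```python
def tile_key(lat, lon, tile_deg=10):
    """Assign a point to a tile; the tile index serves as the chunk key
    under which array data would be stored in a NoSQL database."""
    return (int(lat // tile_deg), int(lon // tile_deg))

def mean_in_bbox(points, lat_min, lat_max, lon_min, lon_max):
    """On-the-fly subset + average over a bounding box,
    given (lat, lon, value) tuples."""
    vals = [v for lat, lon, v in points
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max]
    return sum(vals) / len(vals)

points = [(34.1, -120.2, 1.0), (35.0, -119.5, 3.0), (60.0, 10.0, 99.0)]
assert tile_key(34.1, -120.2) == (3, -13)   # floor division handles negatives
assert mean_in_bbox(points, 30, 40, -125, -115) == 2.0
```

    In a real deployment the bounding box would first be mapped to the set of overlapping tile keys, so only those chunks are fetched from the store.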

  14. An Open Source Simulation Model for Soil and Sediment Bioturbation

    PubMed Central

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches limits the application of such models to a limited range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted to experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data are routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest to the predictive power of the approach. PMID:22162997
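
    A rule-based lattice mixing model of the kind described above can be sketched as follows: at each time step, every particle is displaced by an amount drawn from a species-specific distribution. The uniform displacement rule below is a hypothetical stand-in for the framework's parameterisable probability density functions.

```python
import random

def mix_step(lattice, displace, rng):
    """One time step of lattice mixing: lattice[i] holds the particle
    count in depth cell i; each particle moves by displace(rng) cells,
    clamped to the lattice boundaries."""
    size = len(lattice)
    moved = [0] * size
    for cell, count in enumerate(lattice):
        for _ in range(count):
            new = min(size - 1, max(0, cell + displace(rng)))
            moved[new] += 1
    return moved

rng = random.Random(42)                # fixed seed for reproducibility
lattice = [0, 0, 100, 0, 0]            # 100 tracer particles mid-column
for _ in range(5):
    lattice = mix_step(lattice, lambda r: r.choice((-1, 0, 1)), rng)
assert sum(lattice) == 100             # particles are conserved
```

    Fitting would then amount to choosing the displacement distribution's parameters so that simulated tracer profiles match observed ones, per species.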

  15. MetaTrans: an open-source pipeline for metatranscriptomics.

    PubMed

    Martinez, Xavier; Pozuelo, Marta; Pascal, Victoria; Campos, David; Gut, Ivo; Gut, Marta; Azpiroz, Fernando; Guarner, Francisco; Manichanh, Chaysavanh

    2016-05-23

    To date, meta-omic approaches use high-throughput sequencing technologies, which produce a huge amount of data, thus challenging modern computers. Here we present MetaTrans, an efficient open-source pipeline to analyze the structure and functions of active microbial communities using the power of multi-threading computers. The pipeline is designed to perform two types of RNA-Seq analyses: taxonomic and gene expression. It performs quality-control assessment, rRNA removal, maps reads against functional databases and also handles differential gene expression analysis. Its efficacy was validated by analyzing data from synthetic mock communities, data from a previous study and data generated from twelve human fecal samples. Compared to an existing web application server, MetaTrans shows more efficiency in terms of runtime (around 2 hours per million transcripts) and presents adapted tools to compare gene expression levels. It has been tested with a human gut microbiome database but also proposes an option to use a general database in order to analyze other ecosystems. For the installation and use of the pipeline, we provide a detailed guide at the following website (www.metatrans.org).
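
    The quality-control step mentioned above typically discards low-quality reads before any mapping is done; a minimal sketch is shown below. The mean-quality threshold and record layout are hypothetical illustrations, not MetaTrans internals.

```python
def mean_quality(qual_line, offset=33):
    """Mean Phred score of one FASTQ quality string (Phred+33 encoding)."""
    return sum(ord(c) - offset for c in qual_line) / len(qual_line)

def quality_filter(records, min_q=20):
    """Keep only reads whose mean Phred quality reaches min_q.
    Records are (read_id, sequence, quality) tuples."""
    return [r for r in records if mean_quality(r[2]) >= min_q]

reads = [("r1", "ACGT", "IIII"),   # 'I' = Phred 40 everywhere: keep
         ("r2", "ACGT", "!!!!")]   # '!' = Phred 0 everywhere: drop
assert [r[0] for r in quality_filter(reads)] == ["r1"]
```

    In the full pipeline the surviving reads would next be screened for rRNA and then mapped against the taxonomic or functional databases.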

  16. MetaTrans: an open-source pipeline for metatranscriptomics

    PubMed Central

    Martinez, Xavier; Pozuelo, Marta; Pascal, Victoria; Campos, David; Gut, Ivo; Gut, Marta; Azpiroz, Fernando; Guarner, Francisco; Manichanh, Chaysavanh

    2016-01-01

    To date, meta-omic approaches use high-throughput sequencing technologies, which produce a huge amount of data, thus challenging modern computers. Here we present MetaTrans, an efficient open-source pipeline to analyze the structure and functions of active microbial communities using the power of multi-threading computers. The pipeline is designed to perform two types of RNA-Seq analyses: taxonomic and gene expression. It performs quality-control assessment, rRNA removal, maps reads against functional databases and also handles differential gene expression analysis. Its efficacy was validated by analyzing data from synthetic mock communities, data from a previous study and data generated from twelve human fecal samples. Compared to an existing web application server, MetaTrans shows more efficiency in terms of runtime (around 2 hours per million transcripts) and presents adapted tools to compare gene expression levels. It has been tested with a human gut microbiome database but also proposes an option to use a general database in order to analyze other ecosystems. For the installation and use of the pipeline, we provide a detailed guide at the following website (www.metatrans.org). PMID:27211518

  17. Open-Source Software for Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Oyafuso, Fabiano; Hua, Hook; Tisdale, Edwin; Hart, Don

    2004-01-01

    The Nanoelectronic Modeling 3-D (NEMO 3-D) computer program has been upgraded to open-source status through elimination of license-restricted components. The present version functions equivalently to the version reported in "Software for Numerical Modeling of Nanoelectronic Devices" (NPO-30520), NASA Tech Briefs, Vol. 27, No. 11 (November 2003), page 37. To recapitulate: NEMO 3-D performs numerical modeling of the electronic transport and structural properties of a semiconductor device that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. NEMO 3-D solves the applicable quantum matrix equation on a Beowulf-class cluster computer by use of a parallel-processing matrix vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. A prior upgrade of NEMO 3-D incorporated a capability for a strain treatment, parameterized for bulk material properties of GaAs and InAs, for two tight-binding submodels. NEMO 3-D has been demonstrated in atomistic analyses of effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In(0.6)Ga(0.4)As quantum dots.

  18. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHI should be completely removed from the images according to the respective privacy regulations, but some basic, de-sensitized data are usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
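
    The replace-rather-than-blank strategy described above can be sketched with a plain attribute map. This is a hypothetical simplification: real implementations such as DCMTK operate on actual DICOM tags and follow the standard's confidentiality profiles, and the attribute names and rules below are illustrative only.

```python
# Hypothetical de-identification rules: replace PHI with plausible values,
# keep attributes needed for interpretation, drop everything else.
REPLACEMENTS = {
    "PatientName": "ANONYMOUS",
    "PatientID": lambda v: "PSEUDO-" + str(abs(hash(v)) % 10**6),
}
KEEP = {"Modality", "StudyDate", "PatientSex"}

def de_identify(header):
    """Replace PHI with values still reasonable for diagnosis and
    indexing; attributes that are neither kept nor replaced are dropped."""
    out = {}
    for key, value in header.items():
        if key in REPLACEMENTS:
            rule = REPLACEMENTS[key]
            out[key] = rule(value) if callable(rule) else rule
        elif key in KEEP:
            out[key] = value
    return out

header = {"PatientName": "Doe^John", "PatientID": "12345",
          "Modality": "MR", "PatientAddress": "1 Main St"}
clean = de_identify(header)
assert clean["PatientName"] == "ANONYMOUS"
assert clean["PatientID"].startswith("PSEUDO-")
assert "PatientAddress" not in clean
```

    Re-identification, when permitted, would require the pseudonym mapping to be stored securely and separately from the shared images.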

  19. Open-Source Photometric System for Enzymatic Nitrate Quantification.

    PubMed

    Wittbrodt, B T; Squires, D A; Walbeck, J; Campbell, E; Campbell, W H; Pearce, J M

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive, portable, handheld photometer would greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment to perform this method for water-quality measurement, and iii) make the method easier to carry out in the field. The device is able to perform as well as commercial proprietary systems for less than 15% of the cost for materials. This allows for greater access to the technology and the new, safer nitrate testing technique.

  20. Agile Methods for Open Source Safety-Critical Software

    PubMed Central

    Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-01-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore, if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago; we present our experiences as a case study for renewing the discussion. PMID:21799545

  1. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    PubMed Central

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on an atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that wireless medical video communications of adequate diagnostic quality are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  2. Cloud based, Open Source Software Application for Mitigating Herbicide Drift

    NASA Astrophysics Data System (ADS)

    Saraswat, D.; Scott, B.

    2014-12-01

    The spread of herbicide resistant weeds has resulted in the need for clearly marked fields. In response to this need, the University of Arkansas Cooperative Extension Service launched a program named Flag the Technology in 2011. This program uses color-coded flags as a visual alert of the herbicide trait technology within a farm field. The flag based program also serves to help avoid herbicide misapplication and prevent herbicide drift damage between fields with differing crop technologies. This program has been endorsed by the Southern Weed Science Society of America and is attracting interest from across the USA, Canada, and Australia. However, flags risk misplacement due to mischief or disappearance in severe windstorms and thunderstorms. This presentation will discuss the design and development of a cloud-based, free application utilizing open-source technologies, called Flag the Technology Cloud (FTTCloud), for allowing agricultural stakeholders to color code their farm fields for indicating herbicide resistant technologies. The developed software utilizes modern web development practices, widely used design technologies, and basic geographic information system (GIS) based interactive interfaces for representing, color-coding, searching, and visualizing fields. The program has also been made compatible with devices of different sizes: smartphones, tablets, desktops, and laptops.

  3. An open source simulation model for soil and sediment bioturbation.

    PubMed

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches restricts their application to a narrow range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted to experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data are routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest to the predictive power of the approach.
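
    The combination of a rule-based lattice with parameterisable displacement PDFs can be sketched in a few lines; the lattice depth, particle count, and step distribution below are invented for illustration and are not the paper's fitted parameters.

```python
import random

def mix_step(positions, displace, depth):
    # Apply one bioturbation event: each particle moves by a displacement
    # drawn from a species-specific PDF, clamped at the sediment surface (0)
    # and the base of the mixed layer (depth - 1).
    return [max(0, min(depth - 1, x + displace())) for x in positions]

random.seed(1)

# Hypothetical species profile: mostly local steps, occasional deep transport.
def displace():
    if random.random() < 0.9:
        return random.choice([-1, 0, 1])
    return random.randint(-5, 5)

particles = [10] * 100          # tracer particles starting in one depth layer
for _ in range(50):             # 50 mixing events
    particles = mix_step(particles, displace, depth=40)
```

    Fitting would then amount to tuning the displacement distribution per species until simulated tracer profiles match the observed ones.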

  4. Digital time stamping system based on open source technologies.

    PubMed

    Miskinis, Rimantas; Smirnov, Dmitrij; Urba, Emilis; Burokas, Andrius; Malysko, Bogdan; Laud, Peeter; Zuliani, Francesco

    2010-03-01

    A digital time stamping system based on open source technologies (LINUX-UBUNTU, OpenTSA, OpenSSL, MySQL) is described in detail, including all important testing results. The system, called BALTICTIME, was developed under a project sponsored by the European Commission under the FP6 programme. It was designed to meet the requirements posed to systems of legal and accountable time stamping and to be applicable to the hardware commonly used by the national time metrology laboratories. The BALTICTIME system is intended for the use of governmental and other institutions as well as private individuals. Testing results demonstrate that the time stamps issued to the user by BALTICTIME and saved in BALTICTIME's archives (which implies that the time stamps are accountable) meet all the regulatory requirements. Moreover, the BALTICTIME in its present implementation is able to issue more than 10 digital time stamps per second. The system can be enhanced if needed. The test version of the BALTICTIME service is free and available at http://baltictime.pfi.lt:8080/btws/ and http://baltictime.lnmc.lv:8080/btws/.

  5. RF Wave Simulation Using the MFEM Open Source FEM Package

    NASA Astrophysics Data System (ADS)

    Stillerman, J.; Shiraiwa, S.; Bonoli, P. T.; Wright, J. C.; Green, D. L.; Kolev, T.

    2016-10-01

    A new plasma wave simulation environment based on the finite element method is presented. MFEM, a scalable open-source FEM library, is used as the basis for this capability. MFEM allows for assembling an FEM matrix of arbitrarily high order in a parallel computing environment. A 3D frequency domain RF physics layer was implemented using a python wrapper for MFEM and a cold collisional plasma model was ported. This physics layer allows for defining the plasma RF wave simulation model without user knowledge of the FEM weak-form formulation. A graphical user interface is built on πScope, a python-based scientific workbench, such that a user can build a model definition file interactively. Benchmark cases have been ported to this new environment, with results being consistent with those obtained using COMSOL multiphysics, GENRAY, and TORIC/TORLH spectral solvers. This work is a first step in bringing to bear the sophisticated computational tool suite that MFEM provides (e.g., adaptive mesh refinement, solver suite, element types) to the linear plasma-wave interaction problem, and within more complicated integrated workflows, such as coupling with core spectral solver, or incorporating additional physics such as an RF sheath potential model or kinetic effects. USDoE Awards DE-FC02-99ER54512, DE-FC02-01ER54648.

  6. Tessera: Open source software for accelerated data science

    SciTech Connect

    Sego, Landon H.; Hafen, Ryan P.; Director, Hannah M.; LaMothe, Ryan R.

    2014-06-30

    Extracting useful, actionable information from data can be a formidable challenge for the safeguards, nonproliferation, and arms control verification communities. Data scientists are often on the “front-lines” of making sense of complex and large datasets. They require flexible tools that make it easy to rapidly reformat large datasets, interactively explore and visualize data, develop statistical algorithms, and validate their approaches—and they need to perform these activities with minimal lines of code. Existing commercial software solutions often lack extensibility and the flexibility required to address the nuances of the demanding and dynamic environments where data scientists work. To address this need, Pacific Northwest National Laboratory developed Tessera, an open source software suite designed to enable data scientists to interactively perform their craft at the terabyte scale. Tessera automatically manages the complicated tasks of distributed storage and computation, empowering data scientists to do what they do best: tackling critical research and mission objectives by deriving insight from data. We illustrate the use of Tessera with an example analysis of computer network data.
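
    Tessera builds on R (the datadr and Trelliscope packages), so the toy below is not its API; it only illustrates the divide-and-recombine pattern that Tessera automates at terabyte scale, with hypothetical names throughout.

```python
def divide(records, by):
    # Partition records into subsets keyed by a grouping function.
    groups = {}
    for r in records:
        groups.setdefault(by(r), []).append(r)
    return groups

def recombine(groups, fit):
    # Apply an analytic method to each subset and combine the results.
    return {key: fit(subset) for key, subset in groups.items()}

# Toy network-flow records: (host, bytes transferred)
flows = [("a", 10), ("b", 4), ("a", 6), ("b", 2)]
mean_bytes = recombine(divide(flows, by=lambda r: r[0]),
                       fit=lambda rs: sum(b for _, b in rs) / len(rs))
# mean_bytes == {"a": 8.0, "b": 3.0}
```

    In a real deployment the subsets would live in distributed storage and the per-subset fits would run in parallel, which is exactly the bookkeeping Tessera manages for the analyst.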

  7. An open source device for operant licking in rats.

    PubMed

    Longley, Matthew; Willis, Ethan L; Tay, Cindy X; Chen, Hao

    2017-01-01

    We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior.
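
    The three reinforcement schedules named in the abstract have compact logic; the sketch below is an illustration under assumed conventions (lick counts starting at 1, state kept in a dict), not the authors' published code, which lives at the GitHub link above.

```python
import random

def reward_due(schedule, lick_count, state):
    # Decide whether the current lick earns a reward under the given schedule.
    if schedule == "FR":        # fixed ratio: every Nth lick
        return lick_count % state["n"] == 0
    if schedule == "VR":        # variable ratio: random requirement around a mean
        if lick_count >= state["next"]:
            state["next"] = lick_count + random.randint(1, 2 * state["mean"] - 1)
            return True
        return False
    if schedule == "PR":        # progressive ratio: requirement grows per reward
        if lick_count >= state["next"]:
            state["step"] += state["increment"]
            state["next"] = lick_count + state["step"]
            return True
        return False
    raise ValueError(schedule)
```

    On a fixed ratio of 5, licks 5, 10, 15, ... would trigger the syringe pump; on a progressive ratio the gap between rewards widens after each delivery.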

  8. Agile Methods for Open Source Safety-Critical Software.

    PubMed

    Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-08-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago; we present our experiences as a case study for renewing the discussion.

  9. MULTI-MODAL DATA FUSION SCHEMES FOR INTEGRATED CLASSIFICATION OF IMAGING AND NON-IMAGING BIOMEDICAL DATA

    PubMed Central

    Tiwari, Pallavi; Viswanath, Satish; Lee, George; Madabhushi, Anant

    2015-01-01

    With a wide array of multi-modal, multi-protocol, and multi-scale biomedical data available for disease diagnosis and prognosis, there is a need for quantitative tools to combine such varied channels of information, especially imaging and non-imaging data (e.g. spectroscopy, proteomics). The major problem in such quantitative data integration lies in reconciling the large spread in the range of dimensionalities and scales across the different modalities. The primary goal of quantitative data integration is to build combined meta-classifiers; however these efforts are thwarted by challenges in (1) homogeneous representation of the data channels, (2) fusing the attributes to construct an integrated feature vector, and (3) the choice of learning strategy for training the integrated classifier. In this paper, we seek to (a) define the characteristics that guide the 4 independent methods for quantitative data fusion that use the idea of a meta-space for building integrated multi-modal, multi-scale meta-classifiers, and (b) attempt to understand the key components which allowed each method to succeed. These methods include (1) Generalized Embedding Concatenation (GEC), (2) Consensus Embedding (CE), (3) Semi-Supervised Multi-Kernel Graph Embedding (SeSMiK), and (4) Boosted Embedding Combination (BEC). In order to evaluate the optimal scheme for fusing imaging and non-imaging data, we compared these 4 schemes for the problems of combining (a) multi-parametric MRI with spectroscopy for prostate cancer (CaP) diagnosis in vivo, and (b) histological image with proteomic signatures (obtained via mass spectrometry) for predicting prognosis in CaP patients. The kernel combination approach (SeSMiK) marginally outperformed the embedding combination schemes. Additionally, intelligent weighting of the data channels (based on their relative importance) appeared to outperform unweighted strategies. 
All 4 strategies easily outperformed a naïve decision fusion approach, suggesting that

  10. An Open Source approach to automated hydrological analysis of ungauged drainage basins in Serbia using R and SAGA

    NASA Astrophysics Data System (ADS)

    Zlatanovic, Nikola; Milovanovic, Irina; Cotric, Jelena

    2014-05-01

    Drainage basins are for the most part ungauged or poorly gauged, not only in Serbia but in most parts of the world, usually due to insufficient funds, but also the decommissioning of river gauges in upland catchments to focus on downstream areas which are more populated. Very often, design discharges are needed for these streams or rivers where no streamflow data is available, for various applications. Examples include river training works for flood protection measures or erosion control, design of culverts, water supply facilities, small hydropower plants etc. The estimation of discharges in ungauged basins is most often performed using rainfall-runoff models, whose parameters rely heavily on geomorphometric attributes of the basin (e.g. catchment area, elevation, slopes of channels and hillslopes etc.). The calculation of these, as well as other parameters, is most often done in GIS (Geographic Information System) software environments. This study deals with the application of freely available and open source software and datasets for automating rainfall-runoff analysis of ungauged basins using methodologies currently in use in hydrological practice. The R programming language was used for scripting and automating the hydrological calculations, coupled with SAGA GIS (System for Automated Geoscientific Analysis) for geocomputing functions and terrain analysis. Datasets used in the analyses include the freely available SRTM (Shuttle Radar Topography Mission) terrain data, CORINE (Coordination of Information on the Environment) Land Cover data, as well as soil maps and rainfall data. The choice of free and open source software and datasets makes the project ideal for academic and research purposes and cross-platform projects. The geomorphometric module was tested on more than 100 catchments throughout Serbia and compared to manually calculated values (using topographic maps).
The discharge estimation module was tested on 21 catchments where data were available and compared

  11. Free and Open Source Software for land degradation vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Imbrenda, Vito; Calamita, Giuseppe; Coluzzi, Rosa; D'Emilio, Mariagrazia; Lanfredi, Maria Teresa; Perrone, Angela; Ragosta, Maria; Simoniello, Tiziana

    2013-04-01

    the vulnerability to anthropic factors mainly connected with agricultural and grazing management. To achieve the final ESAs Index depicting the overall vulnerability to degradation of the investigated area, we applied the geometric mean to cross-normalized indices related to each examined component. In this context QGIS was used to display data and to perform basic GIS calculations, whereas GRASS was used for map-algebra operations and image processing. Finally, R was used for statistical analysis (Principal Component Analysis) aimed at determining the relative importance of each adopted indicator. Our results show that the GRASS, QGIS and R software are suitable for mapping land degradation vulnerability and identifying highly vulnerable areas in which rehabilitation/recovery interventions are urgent. In addition, they allow us to highlight the most important drivers of degradation, thus supplying basic information for setting up intervention strategies. Ultimately, Free and Open Source Software offers a fair chance for geoscientific investigations thanks to its high interoperability and flexibility, which make it possible to preserve the accuracy of the data and to reduce processing time. Moreover, the presence of several communities that steadily support users allows for achieving high-quality results, making free and open source software a valuable and easy alternative to conventional commercial software.
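
    The aggregation step described, a geometric mean of cross-normalized component indices, is compact enough to sketch; the [1, 2] normalization range assumed here follows the common MEDALUS convention and may differ from the authors' exact scaling.

```python
def esa_index(indices):
    # Geometric mean of per-component quality indices (e.g. soil, climate,
    # vegetation, management), each normalized to [1, 2] where 1 is least
    # and 2 is most vulnerable.
    product = 1.0
    for v in indices:
        product *= v
    return product ** (1.0 / len(indices))

# One low-vulnerability and one high-vulnerability component combine
# geometrically rather than arithmetically, penalising any weak component.
```

    In the workflow above, this per-pixel combination would be expressed as a GRASS map-algebra statement over the normalized index rasters.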

  12. Evaluation of Open-Source Hard Real Time Software Packages

    NASA Technical Reports Server (NTRS)

    Mattei, Nicholas S.

    2004-01-01

    replacing this somewhat costly implementation is the focus of one of the SA group's current research projects. The explosion of open source software in the last ten years has led to the development of a multitude of software solutions which were once only produced by major corporations. The benefits of these open projects include faster release and bug-patching cycles as well as inexpensive if not free software solutions. The main packages for hard real time solutions under Linux are Real Time Application Interface (RTAI) and two varieties of Real Time Linux (RTL), RTLFree and RTLPro. During my time here at NASA I have been testing various hard real time solutions operating as layers on the Linux Operating System. All testing is being run on an Intel SBC 2590, which is a common embedded hardware platform. The test plan was provided to me by the Software Assurance group at the start of my internship and my job has been to test the systems by developing and executing the test cases on the hardware. These tests are constructed so that the Software Assurance group can get hard test data for a comparison between the open source and proprietary implementations of hard real time solutions.

  13. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system exhibits the following characteristics - it is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services is the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  14. TH-C-BRB-01: Open Source Hardware: General Overview.

    PubMed

    Therriault-Proulx, F

    2016-06-01

    By definition, Open Source Hardware (OSH) is "hardware whose design is made publicly available so that anyone can study, modify, distribute, make, and sell the design or hardware based on that design". The advantages of OSH are multiple and the movement has been growing exponentially over the last couple of years, leading to the spread and evolution of 3D printing technologies, the creation of affordable and easy-to-use micro-controller boards (Arduino, Raspberry Pi, etc.), as well as a plurality of other "hands-on"/DIY projects. As we have seen over the past few years with 3D printing, where the number of projects benefiting clinical practice has grown significantly, the highly educated and technology-savvy Medical Physics community is positioned to take advantage of and benefit from paradigm-shifting movements. Sharing of knowledge, know-how, and technology can be a key factor in furthering the impact medical physicists can have. Whether it is to develop phantoms, applicators, detector holders or devices based on the use of motors and sensors, sharing design files significantly enables further development. Because these designs would be massively peer-reviewed through their online publication, improvements would be made, and the creators of the design would be rewarded with an increased number of citations of their work. A curated database of software and hardware projects can be invaluable to the field, but a critical mass of contributors is likely needed to guarantee the most impact. This symposium will discuss the benefits and hurdles of such an endeavor.

  15. An open-source chemical kinetics network: VULCAN

    NASA Astrophysics Data System (ADS)

    Tsai, Shang-Min; Lyons, James; Heng, Kevin

    2015-12-01

    I will present VULCAN, an open-source 1D chemical kinetics code suited for the temperature and pressure range relevant to observable exoplanet atmospheres. The chemical network is based on a set of reduced rate coefficients for C-H-O systems. Most of the rate coefficients are based on the NIST online database, and validated by comparing with thermodynamic equilibrium codes (TEA, STANJAN). The difference between the experimental rates and those from the thermodynamical data is carefully examined and discussed. For the numerical method, a simple, quick, semi-implicit Euler integrator is adopted to solve the stiff chemical reactions, within an operator-splitting scheme for computational efficiency. Several test runs of VULCAN are shown in a hierarchical way: pure H, H+O, H+O+C, including controlled experiments performed with simple analytical temperature-pressure profiles, so that different parameters, such as the stellar irradiation, atmospheric opacities and albedo, can be individually explored to understand how these properties affect the temperature structure and hence the chemical abundances. I will also revisit the "transport-induced-quenching" effects, and discuss the limitation of this approximation and its impact on observations. Finally, I will discuss the effects of the C/O ratio and compare with published work in the literature. VULCAN is written in Python and is part of the publicly-available set of community tools we call the Exoclimes Simulation Platform (ESP; www.exoclime.org). I am a Ph.D. student of Kevin Heng at the University of Bern, Switzerland.
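
    The appeal of a semi-implicit Euler integrator for stiff kinetics can be shown on a one-species toy problem, dn/dt = -k*n; this is an illustration of the integrator family, not VULCAN's actual solver loop.

```python
def explicit_euler(n, k, dt, steps):
    # Forward Euler: n_new = n * (1 - k*dt); diverges once k*dt > 2.
    for _ in range(steps):
        n = n * (1.0 - k * dt)
    return n

def semi_implicit_euler(n, k, dt, steps):
    # Backward (semi-implicit) Euler: n_new = n / (1 + k*dt);
    # unconditionally stable and positivity-preserving for any step size.
    for _ in range(steps):
        n = n / (1.0 + k * dt)
    return n

# A stiff rate (k = 1000) with a coarse step (dt = 1): the implicit update
# decays smoothly toward equilibrium while the explicit one blows up.
```

    Positivity matters here because chemical abundances must stay non-negative, which is one reason implicit updates are favoured in kinetics codes.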

  16. Acquire: an open-source comprehensive cancer biobanking system

    PubMed Central

    Dowst, Heidi; Pew, Benjamin; Watkins, Chris; McOwiti, Apollo; Barney, Jonathan; Qu, Shijing; Becnel, Lauren B.

    2015-01-01

    Motivation: The probability of effective treatment of cancer with a targeted therapeutic can be improved for patients with defined genotypes containing actionable mutations. To this end, many human cancer biobanks are integrating more tightly with genomic sequencing facilities and with those creating and maintaining patient-derived xenografts (PDX) and cell lines to provide renewable resources for translational research. Results: To support the complex data management needs and workflows of several such biobanks, we developed Acquire. It is a robust, secure, web-based, database-backed open-source system that supports all major needs of a modern cancer biobank. Its modules allow for i) up-to-the-minute ‘scoreboard’ and graphical reporting of collections; ii) end user roles and permissions; iii) specimen inventory through caTissue Suite; iv) shipping forms for distribution of specimens to pathology, genomic analysis and PDX/cell line creation facilities; v) robust ad hoc querying; vi) molecular and cellular quality control metrics to track specimens’ progress and quality; vii) public researcher request; viii) resource allocation committee distribution request review and oversight and ix) linkage to available derivatives of specimen. Availability and Implementation: Acquire implements standard controlled vocabularies, ontologies and objects from the NCI, CDISC and others. Here we describe the functionality of the system, its technological stack and the processes it supports. A test version of Acquire is available at https://tcrbacquire-stg.research.bcm.edu; software is available in https://github.com/BCM-DLDCC/Acquire; and UML models, data and workflow diagrams, behavioral specifications and other documents are available at https://github.com/BCM-DLDCC/Acquire/tree/master/supplementaryMaterials. Contact: becnel@bcm.edu PMID:25573920

  17. Learning from open source software projects to improve scientific review

    PubMed Central

    Ghosh, Satrajit S.; Klein, Arno; Avants, Brian; Millman, K. Jarrod

    2012-01-01

    Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers materials and methods to perform comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing for the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers. PMID:22529798

  18. Nanoparticles for multi-modality cancer diagnosis: Simple protocol for self-assembly of gold nanoclusters mediated by gadolinium ions.

    PubMed

    Hou, Wenxiu; Xia, Fangfang; Alfranca, Gabriel; Yan, Hao; Zhi, Xiao; Liu, Yanlei; Peng, Chen; Zhang, Chunlei; de la Fuente, Jesus Martinez; Cui, Daxiang

    2017-03-01

    It is essential to develop a simple synthetic strategy to improve the quality of multifunctional contrast agents for cancer diagnosis. Herein, we report a time-saving method for gadolinium (Gd(3+)) ions-mediated self-assembly of gold nanoclusters (GNCs) into monodisperse spherical nanoparticles (GNCNs) under mild conditions. The monodisperse, regular and colloidal stable GNCNs were formed via selectively inducing electrostatic interactions between negatively-charged carboxylic groups of gold nanoclusters and trivalent cations of gadolinium in aqueous solution. In this way, the Gd(3+) ions were chelated into GNCNs without the use of molecular gadolinium chelates. With the co-existence of GNCs and Gd(3+) ions, the formed GNCNs exhibit significant luminescence intensity enhancement for near-infrared fluorescence (NIRF) imaging, high X-ray attenuation for computed tomography (CT) imaging and reasonable r1 relaxivity for magnetic resonance (MR) imaging. The excellent biocompatibility of the GNCNs was proved both in vitro and in vivo. Meanwhile, the GNCNs also possess unique NIRF/CT/MR imaging ability in A549 tumor-bearing mice. In a nutshell, the simple and safe GNCNs hold great potential for tumor multi-modality clinical diagnosis.

  19. A non-negative matrix factorization method for detecting modules in heterogeneous omics multi-modal data

    PubMed Central

    Yang, Zi; Michailidis, George

    2016-01-01

    Motivation: Recent advances in high-throughput omics technologies have enabled biomedical researchers to collect large-scale genomic data. As a consequence, there has been growing interest in developing methods to integrate such data to obtain deeper insights regarding the underlying biological system. A key challenge for integrative studies is the heterogeneity present in the different omics data sources, which makes it difficult to discern the coordinated signal of interest from source-specific noise or extraneous effects. Results: We introduce a novel method of multi-modal data analysis that is designed for heterogeneous data based on non-negative matrix factorization. We provide an algorithm for jointly decomposing the data matrices involved that also includes a sparsity option for high-dimensional settings. The performance of the proposed method is evaluated on synthetic data and on real DNA methylation, gene expression and miRNA expression data from ovarian cancer samples obtained from The Cancer Genome Atlas. The results show the presence of common modules across patient samples linked to cancer-related pathways, as well as previously established ovarian cancer subtypes. Availability and implementation: The source code repository is publicly available at https://github.com/yangzi4/iNMF. Contact: gmichail@umich.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26377073
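
    The building block of this method, non-negative matrix factorization by multiplicative updates, can be shown in a rank-1 toy. The paper's iNMF jointly decomposes several data matrices with a sparsity option; the sketch below covers only the basic single-matrix update it generalizes.

```python
def nmf_rank1(X, iters=50):
    # Rank-1 NMF via multiplicative updates: X ~ outer(w, h), all entries >= 0.
    m, n = len(X), len(X[0])
    w, h = [1.0] * m, [1.0] * n
    for _ in range(iters):
        hh = sum(v * v for v in h)
        w = [wi * sum(X[i][j] * h[j] for j in range(n)) / (wi * hh + 1e-12)
             for i, wi in enumerate(w)]
        ww = sum(v * v for v in w)
        h = [hj * sum(X[i][j] * w[i] for i in range(m)) / (hj * ww + 1e-12)
             for j, hj in enumerate(h)]
    return w, h

# For an exactly rank-1 non-negative matrix the reconstruction is near-exact.
w, h = nmf_rank1([[2.0, 4.0], [1.0, 2.0]])
```

    The multiplicative form keeps every factor non-negative by construction, which is what makes the recovered modules directly interpretable as additive parts.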

  20. Multi-modal adaptive optics system including fundus photography and optical coherence tomography for the clinical setting.

    PubMed

    Salas, Matthias; Drexler, Wolfgang; Levecq, Xavier; Lamory, Barbara; Ritter, Markus; Prager, Sonja; Hafner, Julia; Schmidt-Erfurth, Ursula; Pircher, Michael

    2016-05-01

    We present a new compact multi-modal imaging prototype that combines an adaptive optics (AO) fundus camera with AO-optical coherence tomography (OCT) in a single instrument. The prototype allows acquiring AO fundus images with a field of view of 4° × 4° and with a frame rate of 10 fps. The exposure time of a single image is 10 ms. The short exposure time results in nearly motion artifact-free high resolution images of the retina. The AO-OCT mode allows acquiring volumetric data of the retina at 200 kHz A-scan rate with a transverse resolution of ~4 µm and an axial resolution of ~5 µm. OCT imaging is acquired within a field of view of 2° × 2° located at the central part of the AO fundus image. Recording of OCT volume data takes 0.8 seconds. The performance of the new system is tested in healthy volunteers and patients with retinal diseases.

  1. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Because the ESO can simultaneously estimate the system state variables, the output superposition and control coupling of other modes, external excitation, and model uncertainties, a composite control method, i.e., the ESO-based vibration control scheme, is employed to ensure rejection of lumped disturbances and uncertainties in the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith predictor technique. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.
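    The Smith predictor compensation mentioned above can be sketched on a toy problem: a first-order plant with input delay under a plain proportional loop (illustrative plant, gains and delay; not the paper's ACSP model or its PD/feed-forward law). The predictor feeds back the measured output plus the difference between a delay-free model and a delayed model, so the controller effectively acts on an undelayed plant.

```python
import numpy as np

def step_response(delay=0.2, use_smith=True, T=4.0, dt=0.01):
    """Unit-step tracking of dx/dt = -a*x + b*u(t - delay) under a
    proportional controller, optionally with Smith-predictor feedback
    y + (delay-free model output) - (delayed model output)."""
    a, b, kp = 2.0, 2.0, 6.0          # toy plant and gain (assumed values)
    d = int(round(delay / dt))
    n = int(round(T / dt))
    u_hist = [0.0] * d                # buffer of the last d control inputs
    y = ym = ymd = 0.0                # plant, model w/o delay, model w/ delay
    out = []
    for _ in range(n):
        fb = y + ym - ymd if use_smith else y
        u = kp * (1.0 - fb)           # track a unit setpoint
        u_delayed = u_hist.pop(0) if d else u
        u_hist.append(u)
        y   += dt * (-a * y   + b * u_delayed)   # real (delayed) plant
        ym  += dt * (-a * ym  + b * u)           # internal model, no delay
        ymd += dt * (-a * ymd + b * u_delayed)   # internal model, with delay
        out.append(y)
    return np.array(out)
```

With these assumed numbers the delay exceeds the loop's delay margin, so the uncompensated response oscillates while the Smith-compensated loop settles smoothly, which is the same phase-lag problem the paper addresses for its non-collocated sensor/actuator pairs.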

  2. Brain Structure and Function Associated with a History of Sport Concussion: A Multi-Modal Magnetic Resonance Imaging Study.

    PubMed

    Churchill, Nathan; Hutchison, Michael; Richards, Doug; Leung, General; Graham, Simon; Schweizer, Tom A

    2017-02-15

    There is growing concern about the potential long-term consequences of sport concussion for young, currently active athletes. However, there remains limited information about brain abnormalities associated with a history of concussion and how they relate to clinical factors. In this study, advanced MRI was used to comprehensively describe abnormalities in brain structure and function associated with a history of sport concussion. Forty-three athletes (21 male, 22 female) were recruited from interuniversity teams at the beginning of the season, including 21 with a history of concussion and 22 without prior concussion; both groups also contained a balanced sample of contact and noncontact sports. Multi-modal MRI was used to evaluate abnormalities in brain structure and function. Athletes with a history of concussion showed frontal decreases in brain volume and blood flow. However, they also demonstrated increased posterior cortical volume and elevated markers of white matter microstructure. A greater number of prior concussions was associated with more extensive decreases in cerebral blood flow and insular volume, whereas recovery time from most recent concussion was correlated with reduced frontotemporal volume. White matter showed limited correlations with clinical factors, predominantly in the anterior corona radiata. This study provides the first evidence of the long-term effects of concussion on gray matter volume, blood flow, and white matter microstructure within a single athlete cohort. This was examined for a mixture of male and female athletes in both contact and noncontact sports, demonstrating the relevance of these findings for the overall sporting community.

  3. Automated segmentation of corticospinal tract in diffusion tensor images via multi-modality multi-atlas fusion

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoying; Mori, Susumu; Miller, Michael I.

    2014-03-01

    In this paper, we propose a method to automatically segment the corticospinal tract (CST) in diffusion tensor images (DTIs) by incorporating the anatomical features from multi-modality images generated in DTI using multiple DTI atlases. The test subject to be segmented, and each atlas, comprises images of different modalities - the mean diffusivity, the fractional anisotropy, and the images representing the three elements of the primary eigenvector. Each atlas had a paired image containing the manually delineated segmentations of the three regions of interest - the left and right CST and the background surrounding the CST. We solve the problem via maximum a posteriori estimation using generative models. Each modality image is modeled as a conditional Gaussian mixture random field, conditioned on the atlas-label pair and the local change of coordinates for each label. The expectation-maximization algorithm is used to alternately estimate the local optimal diffeomorphisms for each label and the maximizing segmentations. The algorithm is evaluated on six subjects with a wide range of pathology. We compare the proposed method with two state-of-the-art multi-atlas based label fusion methods, against which the method displayed a high level of accuracy.
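    A drastically simplified stand-in for the fusion step can make the idea concrete: instead of the conditional Gaussian mixture model and EM-estimated diffeomorphisms described above, the sketch below fuses labels from already-registered atlases by intensity-similarity-weighted voting. The weighting kernel and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def weighted_label_fusion(target, atlas_images, atlas_labels, beta=1.0):
    """Fuse per-voxel labels from registered atlases: each atlas votes for
    its label with weight exp(-beta * (I_atlas - I_target)^2), and the
    highest-vote label wins at every voxel."""
    target = np.asarray(target, float)
    labels = np.unique(np.concatenate([np.unique(l) for l in atlas_labels]))
    votes = np.zeros(labels.shape + target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        w = np.exp(-beta * (np.asarray(img, float) - target) ** 2)
        for i, c in enumerate(labels):
            votes[i] += w * (np.asarray(lab) == c)
    return labels[np.argmax(votes, axis=0)]
```

Atlases whose intensities resemble the target locally dominate the vote, which is the intuition behind the more principled generative fusion the paper evaluates against.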

  4. Expanding neurochemical investigations with multi-modal recording: simultaneous fast-scan cyclic voltammetry, iontophoresis, and patch clamp measurements.

    PubMed

    Kirkpatrick, D C; McKinney, C J; Manis, P B; Wightman, R M

    2016-08-02

    Multi-modal recording describes the simultaneous collection of information across distinct domains. Compared to isolated measurements, such studies can more easily determine relationships between varieties of phenomena. This is useful for neurochemical investigations which examine cellular activity in response to changes in the local chemical environment. In this study, we demonstrate a method to perform simultaneous patch clamp measurements with fast-scan cyclic voltammetry (FSCV) using optically isolated instrumentation. A model circuit simulating concurrent measurements was used to predict the electrical interference between instruments. No significant impact was anticipated between methods, and predictions were largely confirmed experimentally. One exception was due to capacitive coupling of the FSCV potential waveform into the patch clamp amplifier. However, capacitive transients measured in whole-cell current clamp recordings were well below the level of biological signals, which allowed the activity of cells to be easily determined. Next, the activity of medium spiny neurons (MSNs) was examined in the presence of an FSCV electrode to determine how the exogenous potential impacted nearby cells. The activities of both resting and active MSNs were unaffected by the FSCV waveform. Additionally, application of an iontophoretic current, used to locally deliver drugs and other neurochemicals, did not affect neighboring cells. Finally, MSN activity was monitored during iontophoretic delivery of glutamate, an excitatory neurotransmitter. Membrane depolarization and cell firing were observed concurrently with chemical changes around the cell resulting from delivery. In all, we show how combined electrophysiological and electrochemical measurements can relate information between domains and increase the power of neurochemical investigations.

  5. Active vibration control of structure by Active Mass Damper and Multi-Modal Negative Acceleration Feedback control algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Don-Ho; Shin, Ji-Hwan; Lee, HyunWook; Kim, Seoug-Ki; Kwak, Moon K.

    2017-03-01

    In this study, an Active Mass Damper (AMD) consisting of an AC servo motor, a movable mass connected to the AC servo motor by a ball-screw mechanism, and an accelerometer as a sensor for vibration measurement was considered. Considering the capability of the AC servo motor which can follow the desired displacement accurately, the Negative Acceleration Feedback (NAF) control algorithm which uses the acceleration signal directly and produces the desired displacement for the active mass was proposed. The effectiveness of the NAF control was proved theoretically using a single-degree-of-freedom (SDOF) system. It was found that the stability condition for the NAF control is static and it can effectively increase the damping of the target natural mode without causing instability in the low frequency region. Based on the theoretical results of the SDOF system, the Multi-Modal NAF (MMNAF) control is proposed to suppress multiple natural modes of multi-degree-of-freedom (MDOF) systems using a single AMD. It was proved both theoretically and experimentally that the MMNAF control can suppress vibrations of the MDOF system.

  6. "Combo" nanomedicine: Co-delivery of multi-modal therapeutics for efficient, targeted, and safe cancer therapy.

    PubMed

    Kemp, Jessica A; Shim, Min Suk; Heo, Chan Yeong; Kwon, Young Jik

    2016-03-01

    The dynamic and versatile nature of diseases such as cancer has been a pivotal challenge for developing efficient and safe therapies. Cancer treatments using a single therapeutic agent often result in limited clinical outcomes due to tumor heterogeneity and drug resistance. Combination therapies using multiple therapeutic modalities can synergistically elevate anti-cancer activity while lowering doses of each agent, hence, reducing side effects. Co-administration of multiple therapeutic agents requires a delivery platform that can normalize pharmacokinetics and pharmacodynamics of the agents, prolong circulation, selectively accumulate, specifically bind to the target, and enable controlled release in the target site. Nanomaterials, such as polymeric nanoparticles, gold nanoparticles/cages/shells, and carbon nanomaterials, have the desired properties, and they can mediate therapeutic effects different from those generated by small molecule drugs (e.g., gene therapy, photothermal therapy, photodynamic therapy, and radiotherapy). This review aims to provide an overview of developing multi-modal therapies using nanomaterials ("combo" nanomedicine) along with the rationale, up-to-date progress, further considerations, and the crucial roles of interdisciplinary approaches.

  7. An efficient nano-based theranostic system for multi-modal imaging-guided photothermal sterilization in gastrointestinal tract.

    PubMed

    Liu, Zhen; Liu, Jianhua; Wang, Rui; Du, Yingda; Ren, Jinsong; Qu, Xiaogang

    2015-07-01

    Since understanding the healthy status of the gastrointestinal tract (GI tract) is of vital importance, clinical implementations for GI tract-related diseases have attracted increasing attention along with the rapid development of modern medicine. Here, a multifunctional theranostic system combining X-rays/CT/photothermal/photoacoustic mapping of the GI tract and imaging-guided photothermal anti-bacterial treatment is designed and constructed. PEGylated W18O49 nanosheets (PEG-W18O49) are created via a facile solvothermal method and an in situ probe-sonication approach. In terms of excellent colloidal stability, low cytotoxicity, and negligible hemolysis of PEG-W18O49, we demonstrate the first example of high-performance four-modal imaging of the GI tract by using these nanosheets as contrast agents. More importantly, due to their intrinsic absorption of NIR light, glutaraldehyde-modified PEG-W18O49 are successfully applied as fault-free targeted photothermal agents for imaging-guided killing of bacteria in a mouse infection model. Critical to pre-clinical and clinical prospects, long-term toxicity is further investigated after oral administration of these theranostic agents. These kinds of tungsten-based nanomaterials exhibit great potential as multi-modal contrast agents for directed visualization of the GI tract and anti-bacterial agents for photothermal sterilization.

  8. Multi-modal miniature microscope: 4M Device for bio-imaging applications - an overview of the system

    NASA Astrophysics Data System (ADS)

    Tkaczyk, Tomasz S.; Rogers, Jeremy D.; Rahman, Mohammed; Christenson, Todd C.; Gaalema, Stephen; Dereniak, Eustace L.; Richards-Kortum, Rebecca; Descour, Michael R.

    2005-09-01

    The multi-modal miniature microscope (4M) device to image morphology and cytochemistry in vivo is a microscope on a chip including optical, micro-mechanical, and electronic components. This paper describes all major system components: optical system, custom high speed CMOS detector and comb drive actuator. The hybrid sol-gel lenses, their fabrication and assembling technology, optical system parameters, and various operation modes (fluorescence, reflectance, structured illumination) are also discussed. A particularly interesting method is a structured illumination technique that delivers confocal-imaging capabilities and may be used for optical sectioning. For reconstruction of the sectioned layer a sine approximation algorithm is applied. Structured illumination is produced with a LIGA-fabricated actuator scanning in resonance. The spatial resolution of the system is 1 μm, magnified 4× to match the CMOS pixel size of 4 μm (a lateral magnification of 4:1), and the field of view of the system is 250 μm. An overview of the 4M device is combined with the presentation of imaging results for epithelial cell phantoms with optical properties characteristic of normal and cancerous tissue labeled with nanoparticles.
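    The optical-sectioning idea behind structured illumination can be illustrated with the classic three-phase square-law demodulation; the sine approximation algorithm mentioned above is a related but distinct reconstruction, so treat this as a generic sketch rather than the 4M pipeline. Three frames are taken with the illumination grid shifted by 0, 2π/3 and 4π/3; only in-focus structure retains the modulation, and the demodulation recovers its amplitude while out-of-focus (unmodulated) light cancels.

```python
import numpy as np

def si_section(i1, i2, i3):
    """Optically sectioned image from three structured-illumination frames
    at grid phases 0, 2*pi/3, 4*pi/3 (square-law demodulation): for frames
    B + A*sin(phi + phase), the result is the modulation amplitude A,
    independent of the local grid phase phi and background B."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```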

  9. Effects of Multi-modal Physiotherapy, Including Hip Abductor Strengthening, in Patients with Iliotibial Band Friction Syndrome

    PubMed Central

    Beers, Amanda; Ryan, Michael; Kasubuchi, Zenya; Fraser, Scott

    2008-01-01

    Purpose: The purposes of this study were to quantitatively examine hip abductor strength in patients presenting with iliotibial band friction syndrome (ITBFS) and to determine whether a multi-modal physiotherapy approach, including hip abductor strengthening, might play a role in recovery. Method: Our observational, pretest–posttest study is one of the first prospective studies in this area. Patients presenting to physiotherapy with unilateral ITBFS were recruited to participate. Participants followed a 6-week rehabilitation programme designed to strengthen hip abductors; strength was measured every 2 weeks using a hand-held dynamometer and compared bilaterally. Results: Sixteen subjects (five men, 11 women) aged 20 to 53 years participated. All but 2 reported running as one of their main physical activities. A trend toward a significant difference in hip abductor strength was found between the injured and uninjured sides at baseline, but this difference disappeared by 6 weeks. Hip abductor strength was significantly related to physical function at weeks 2, 4, and 6. Nine subjects were discharged from physiotherapy after the 6-week period, while the other 7 subjects continued attending for up to 5 months. Conclusions: Hip abductor strengthening appeared to be beneficial in the treatment of ITBFS, but further research on the use of hip abductor strengthening for treatment and prevention of ITBFS is needed. PMID:20145781

  10. 3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients

    PubMed Central

    Nie, Dong; Zhang, Han; Adeli, Ehsan; Liu, Luyan

    2016-01-01

    High-grade glioma is the most aggressive and severe brain tumor, leading to death of almost 50% of patients in 1–2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict if the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications. PMID:28149967

  11. 3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients.

    PubMed

    Nie, Dong; Zhang, Han; Adeli, Ehsan; Liu, Luyan; Shen, Dinggang

    2016-10-01

    High-grade glioma is the most aggressive and severe brain tumor, leading to death of almost 50% of patients in 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict if the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.
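    The final stage of the pipeline above trains an SVM on the concatenated deep-image and clinical features. The 3D CNN itself is out of scope here, but that last stage can be sketched with a minimal Pegasos-style linear SVM; the authors' actual SVM solver and features are not specified in the abstract, and the data in the test are synthetic.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal Pegasos-style linear SVM: stochastic subgradient descent on
    the L2-regularized hinge loss. Labels y must be in {-1, +1}; X rows
    would be the concatenated image + clinical feature vectors."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1:         # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                  # only shrink (regularize)
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function: +1 = long OS, -1 = short OS."""
    return np.where(X @ w + b >= 0, 1, -1)
```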

  12. Multi-modal adaptive optics system including fundus photography and optical coherence tomography for the clinical setting

    PubMed Central

    Salas, Matthias; Drexler, Wolfgang; Levecq, Xavier; Lamory, Barbara; Ritter, Markus; Prager, Sonja; Hafner, Julia; Schmidt-Erfurth, Ursula; Pircher, Michael

    2016-01-01

    We present a new compact multi-modal imaging prototype that combines an adaptive optics (AO) fundus camera with AO-optical coherence tomography (OCT) in a single instrument. The prototype allows acquiring AO fundus images with a field of view of 4° × 4° and with a frame rate of 10 fps. The exposure time of a single image is 10 ms. The short exposure time results in nearly motion artifact-free high resolution images of the retina. The AO-OCT mode allows acquiring volumetric data of the retina at 200 kHz A-scan rate with a transverse resolution of ~4 µm and an axial resolution of ~5 µm. OCT imaging is acquired within a field of view of 2° × 2° located at the central part of the AO fundus image. Recording of OCT volume data takes 0.8 seconds. The performance of the new system is tested in healthy volunteers and patients with retinal diseases. PMID:27231621

  13. Single-Step Assembly of Multi-Modal Imaging Nanocarriers: MRI and Long-Wavelength Fluorescence Imaging

    PubMed Central

    Pinkerton, Nathalie M.; Gindy, Marian E.; Calero-DdelC, Victoria L.; Wolfson, Theodore; Pagels, Robert F.; Adler, Derek; Gao, Dayuan; Li, Shike; Wang, Ruobing; Zevon, Margot; Yao, Nan; Pacheco, Carlos; Therien, Michael J.; Rinaldi, Carlos; Sinko, Patrick J.

    2015-01-01

    MRI and NIR-active, multi-modal Composite NanoCarriers (CNCs) are prepared using a simple, one-step process, Flash NanoPrecipitation (FNP). The FNP process allows for the independent control of the hydrodynamic diameter, co-core excipient and NIR dye loading, and iron oxide-based nanocrystal (IONC) content of the CNCs. In the controlled precipitation process, 10 nm IONCs are encapsulated into poly(ethylene glycol) stabilized CNCs to make biocompatible T2 contrast agents. By adjusting the formulation, CNC size is tuned between 80 and 360 nm. Holding the CNC size constant at an intensity weighted average diameter of 99 ± 3 nm (PDI width 28 nm), the particle relaxivity varies linearly with encapsulated IONC content, ranging from 66 to 533 mM⁻¹ s⁻¹ for CNCs formulated with 4 to 16 wt% IONC. To demonstrate the use of CNCs as in vivo MRI contrast agents, CNCs are surface functionalized with liver targeting hydroxyl groups. The CNCs enable the detection of 0.8 mm³ non-small cell lung cancer metastases in mice livers via MRI. Incorporating the hydrophobic, NIR dye PZn3 into CNCs enables complementary visualization with long-wavelength fluorescence at 800 nm. In vivo imaging demonstrates the ability of CNCs to act both as MRI and fluorescent imaging agents. PMID:25925128

  14. Enabling Low-Power, Multi-Modal Neural Interfaces Through a Common, Low-Bandwidth Feature Space.

    PubMed

    Irwin, Zachary T; Thompson, David E; Schroeder, Karen E; Tat, Derek M; Hassani, Ali; Bullard, Autumn J; Woo, Shoshana L; Urbanchek, Melanie G; Sachs, Adam J; Cederna, Paul S; Stacey, William C; Patil, Parag G; Chestek, Cynthia A

    2016-05-01

    Brain-Machine Interfaces (BMIs) have shown great potential for generating prosthetic control signals. Translating BMIs into the clinic requires fully implantable, wireless systems; however, current solutions have high power requirements which limit their usability. Lowering this power consumption typically limits the system to a single neural modality, or signal type, and thus to a relatively small clinical market. Here, we address both of these issues by investigating the use of signal power in a single narrow frequency band as a decoding feature for extracting information from electrocorticographic (ECoG), electromyographic (EMG), and intracortical neural data. We have designed and tested the Multi-modal Implantable Neural Interface (MINI), a wireless recording system which extracts and transmits signal power in a single, configurable frequency band. In prerecorded datasets, we used the MINI to explore low frequency signal features and any resulting tradeoff between power savings and decoding performance losses. When processing intracortical data, the MINI achieved a power consumption 89.7% less than a more typical system designed to extract action potential waveforms. When processing ECoG and EMG data, the MINI achieved similar power reductions of 62.7% and 78.8%. At the same time, using the single signal feature extracted by the MINI, we were able to decode all three modalities with less than a 9% drop in accuracy relative to using high-bandwidth, modality-specific signal features. We believe this system architecture can be used to produce a viable, cost-effective, clinical BMI.
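    The single-band signal-power feature at the heart of the MINI can be approximated offline with a simple periodogram. This is a conceptual sketch only: the implant extracts band power in configurable hardware, not via an FFT, and the band edges and sampling rate below are illustrative.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of `signal` within a single frequency band [f_lo, f_hi] Hz,
    computed from the FFT periodogram -- the kind of one-band, low-bandwidth
    feature a MINI-style recording system would transmit per channel."""
    signal = np.asarray(signal, float)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size  # periodogram
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()
```

Streaming one scalar per channel instead of full waveforms is what enables the large power savings reported above.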

  15. Barriers to open source software adoption in Quebec's health care organizations.

    PubMed

    Paré, Guy; Wybo, Michael D; Delannoy, Charles

    2009-02-01

    We conducted in-depth interviews with 15 CIOs to identify the principal impediments to adoption of open source software in the Quebec health sector. We found that key factors for not adopting an open source solution were closely linked to the orientations of ministry level policy makers and a seeming lack of information on the part of operational level IT managers concerning commercially oriented open source providers. We use the case of recent changes in the structure of Quebec's health care organizations and a change in the commercial policies of a key vendor to illustrate our conclusions regarding barriers to adoption of open source products.

  16. Common characteristics of open source software development and applicability for drug discovery: a systematic review

    PubMed Central

    2011-01-01

    Background Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. Methods A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Results Common characteristics of open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. Conclusions We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents. PMID:21955914

  17. Identifying duplicate crystal structures: XTALCOMP, an open-source solution

    NASA Astrophysics Data System (ADS)

    Lonie, David C.; Zurek, Eva

    2012-03-01

    We describe the implementation of XTALCOMP, an efficient, reliable, and open-source library that tests if two crystal descriptions describe the same underlying structure. The algorithm has been tested and found to correctly identify duplicate structures in spite of the "real-world" difficulties that arise from working with numeric crystal representations: degenerate unit cell lattices, numerical noise, periodic boundaries, and the lack of a canonical coordinate origin. The library is portable, open, and not dependent on any external packages. A web interface to the algorithm is publicly accessible at http://xtalopt.openmolecules.net/xtalcomp/xtalcomp.html. Program summary: Program title: XtalComp Catalogue identifier: AEKV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: "New" (3-clause) BSD [1] No. of lines in distributed program, including test data, etc.: 3148 No. of bytes in distributed program, including test data, etc.: 21 860 Distribution format: tar.gz Programming language: C++ Computer: No restrictions Operating system: All operating systems with a compliant C++ compiler. Classification: 7.8 Nature of problem: Computationally identifying duplicate crystal structures taken from the output of modern solid state calculations is a non-trivial exercise for many reasons. The translation vectors in the description are not unique — they may be transformed into linear combinations of themselves and continue to describe the same extended structure. The coordinates and cell parameters contain numerical noise. The periodic boundary conditions at the unit cell faces, edges, and corners can cause very small displacements of atomic coordinates to result in very different representations. The positions of all atoms may be uniformly translated by an arbitrary vector without modifying the underlying structure. Additionally, certain
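    One sub-problem XtalComp handles — matching atomic coordinates despite an arbitrary common origin shift and periodic wrap-around — can be illustrated in miniature. The sketch below assumes the same lattice, a single atom type, and fractional coordinates; the real library additionally handles lattice transformations, multiple atom types, and Cartesian-space tolerances, so this is an illustration of the idea, not XtalComp's algorithm.

```python
import numpy as np

def same_structure(frac_a, frac_b, tol=1e-3):
    """Do two sets of fractional coordinates describe the same structure,
    allowing an arbitrary common origin shift and periodic wrap-around?
    Tries aligning each atom of B with the first atom of A, then checks a
    one-to-one match under the minimum-image convention."""
    A, B = np.asarray(frac_a, float), np.asarray(frac_b, float)
    if A.shape != B.shape:
        return False
    for b0 in B:                          # candidate origin alignments
        shift = (A[0] - b0) % 1.0
        shifted = (B + shift) % 1.0
        d = np.abs(shifted[:, None, :] - A[None, :, :])
        d = np.minimum(d, 1.0 - d)        # minimum-image distance per axis
        cost = np.linalg.norm(d, axis=-1)
        if all(cost[i].min() < tol for i in range(len(B))):
            # require the nearest-neighbour matching to be one-to-one
            if len({int(cost[i].argmin()) for i in range(len(B))}) == len(A):
                return True
    return False
```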

  18. Ancient Secrets of Open-Source Geoscience Software Management (Invited)

    NASA Astrophysics Data System (ADS)

    Zender, C. S.

    2009-12-01

    Geoscience research often involves complex models and data analysis designed to test increasingly sophisticated theories. Re-use and improvement of existing models and tools can be more efficient than their re-invention, and this re-use can accelerate knowledge generation and discovery. Open Source Software (OSS) is designed and intended to be re-used, extended, and improved. Hence Earth and Space Science Models (ESSMs) intended for community use are commonly distributed with OSS or OSS-like licenses. Why is it that, despite their permissive licenses, only a relatively small fraction of ESSMs receive community adoption, improvement, and extension? One reason is that developing community geoscience software remains a difficult and perilous exercise for the practicing researcher. This presentation will intercompare the rationale and results of different software management approaches taken in my dozen years as a developer and maintainer of, and participant in, four distinct ESSMs with 10 to 10,000 users. The primary lesson learned is that geoscience research is similar to the wider OSS universe in that most participants are motivated by the desire for greater professional recognition and attribution best summarized as "mindshare". ESSM adoption often hinges on whether the tension between users and developers for mindshare manifests as cooperation or competition. ESSM project management, therefore, should promote (but not require) recognition of all contributors. More practical model management practices include mailing lists, highly visible documentation, consistent APIs, regression tests, and periodic releases to improve features and fix bugs and builds. However, most ESSMs originate as working incarnations of short-term (~three year) research projects and, as such, lack permanent institutional support. Adhering to best software practices to transition these ESSMs from personal to community models often requires sacrificing research time. Recently, funding agencies

  19. The Geoinformatica free and open source software stack

    NASA Astrophysics Data System (ADS)

    Jolma, A.

    2012-04-01

    The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components, namely GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language, for which there is a very large set of FOSS modules for a wide range of purposes and which can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL with its geospatial extension PostGIS can be used below the three above-mentioned components. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster and vector data and graphs. The hydrological terrain functions are already rather old and suffer, for example, from the requirement of in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research, which is targeted towards custom analytical tools, prototyping and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools that are available especially in GDAL and Perl modules. Prototyping is supported by the GTK+ library, the GUI tools, and the support for object

  20. THOR: an open-source exo-GCM

    NASA Astrophysics Data System (ADS)

    Grosheintz, Luc; Mendonça, João; Käppeli, Roger; Lukas Grimm, Simon; Mishra, Siddhartha; Heng, Kevin

    2015-12-01

    implicit GCM. By ESS3, I hope to present results for the advection equation. THOR is part of the Exoclimes Simulation Platform (ESP), a set of open-source community codes for simulating and understanding the atmospheres of exoplanets. The ESP also includes tools for radiative transfer and retrieval (HELIOS), an opacity calculator (HELIOS-K), and a chemical kinetics solver (VULCAN). We expect to publicly release an initial version of THOR in 2016 on www.exoclime.org.

  1. EHDViz: clinical dashboard development using open-source technologies

    PubMed Central

    Badgeley, Marcus A; Shameer, Khader; Glicksberg, Benjamin S; Tomlinson, Max S; Levin, Matthew A; McCormick, Patrick J; Kasarskis, Andrew; Reich, David L; Dudley, Joel T

    2016-01-01

    -driven precision medicine. As an open-source visualisation framework capable of integrating health assessment, EHDViz aims to be a valuable toolkit for rapid design, development and implementation of scalable clinical data visualisation dashboards. PMID:27013597

  2. A flexible open-source toolkit for lava flow simulations

    NASA Astrophysics Data System (ADS)

    Mossoux, Sophie; Feltz, Adelin; Poppe, Sam; Canters, Frank; Kervyn, Matthieu

    2014-05-01

    Lava flow hazard modeling is a useful tool for scientists and stakeholders confronted with imminent or long-term hazard from basaltic volcanoes. It can improve their understanding of the spatial distribution of volcanic hazard, influence their land use decisions, and improve city evacuation during a volcanic crisis. Although a range of empirical, stochastic and physically-based lava flow models exists, these models are rarely available or require a large number of physical constraints. We present a GIS toolkit which models lava flow propagation from one or multiple eruptive vents, defined interactively on a Digital Elevation Model (DEM). It combines existing probabilistic (VORIS) and deterministic (FLOWGO) models in order to improve the simulation of lava flow spatial spread and terminal length. Not only is this toolkit open-source, running in Python, which allows users to adapt the code to their needs, but it also allows users to combine the included models in different ways. The lava flow paths are determined based on the probabilistic steepest slope (VORIS model - Felpeto et al., 2001), which can be constrained in order to favour concentrated or dispersed flow fields. Moreover, the toolkit allows the inclusion of a corrective factor so that the lava can overcome small topographical obstacles or pits. The lava flow terminal length can be constrained using a fixed length value or a Gaussian probability density function, or can be calculated from the thermo-rheological properties of the open-channel lava flow (FLOWGO model - Harris and Rowland, 2001). These slope-constrained properties allow estimating the velocity of the flow and its heat losses. The lava flow stops when its velocity is zero or the lava temperature reaches the solidus. Recent lava flows of Karthala volcano (Comoros islands) are used here to demonstrate the quality of lava flow simulations with the toolkit, using a quantitative assessment of the match between the simulations and the real lava flows. 
The
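    The probabilistic steepest-slope idea can be sketched as follows (a minimal NumPy illustration in the spirit of VORIS; the toolkit's actual implementation, its corrective height factor, and its length constraints are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def lava_path(dem, start, max_steps=100):
    """Propagate one lava path: at each cell, pick a downslope neighbour
    with probability proportional to the elevation drop (probabilistic
    steepest slope). Flow stops in a pit (no downslope neighbour)."""
    r, c = start
    path = [start]
    rows, cols = dem.shape
    for _ in range(max_steps):
        nbrs, drops = [], []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = dem[r, c] - dem[nr, nc]
                    if drop > 0:
                        nbrs.append((nr, nc))
                        drops.append(drop)
        if not nbrs:          # pit reached: flow stops
            break
        prob = np.asarray(drops) / sum(drops)
        r, c = nbrs[rng.choice(len(nbrs), p=prob)]
        path.append((r, c))
    return path

dem = np.array([[4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0],
                [2.0, 1.0, 0.0]])
print(lava_path(dem, (0, 0))[-1])  # always ends in the (2, 2) pit
```

    Running many such stochastic paths from the same vent and accumulating cell visit counts yields the concentrated or dispersed flow fields the abstract describes.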

  3. Implementation of an OAIS Repository Using Free, Open Source Software

    NASA Astrophysics Data System (ADS)

    Flathers, E.; Gessler, P. E.; Seamon, E.

    2015-12-01

    The Northwest Knowledge Network (NKN) is a regional data repository located at the University of Idaho that focuses on the collection, curation, and distribution of research data. To support our home institution and others in the region, we offer services to researchers at all stages of the data lifecycle—from grant application and data management planning to data distribution and archive. In this role, we recognize the need to work closely with other data management efforts at partner institutions and agencies, as well as with larger aggregation efforts such as our state geospatial data clearinghouses, data.gov, DataONE, and others. In the past, one of our challenges with monolithic, prepackaged data management solutions has been that customization can be difficult to implement and maintain, especially as new versions of the software are released that are incompatible with our local codebase. Our solution is to break the monolith up into its constituent parts, which offers us several advantages. First, any customizations that we make are likely to fall into areas that can be accessed through Application Program Interfaces (APIs) that are likely to remain stable over time, so our code stays compatible. Second, as components become obsolete or insufficient to meet new demands that arise, we can replace individual components with minimal effect on the rest of the infrastructure, causing less disruption to operations. Other advantages include increased system reliability, staggered rollout of new features, enhanced compatibility with legacy systems, reduced dependence on a single software company as a point of failure, and the separation of development into manageable tasks. In this presentation, we describe our application of the Service Oriented Architecture (SOA) design paradigm to assemble a data repository that conforms to the Open Archival Information System (OAIS) Reference Model, primarily using a collection of free and open-source software. 
We detail the design

  4. Combined multi-modal photoacoustic tomography, optical coherence tomography (OCT) and OCT angiography system with an articulated probe for in vivo human skin structure and vasculature imaging

    PubMed Central

    Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang

    2016-01-01

    Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT) and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplements the microvasculature which PAT alone is unable to provide. The co-registered vessel network is further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive imaging of human skin vasculature and morphology in vivo. PMID:27699106

  5. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to require bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  6. A Comprehensive Structural Analysis Process for Failure Assessment in Aircraft Lap-Joint Mimics Using Multi-Modal Fusion of NDE Data (Preprint)

    DTIC Science & Technology

    2012-07-01

    modality. In order to address the limitations of FEM-based methods in their ability to predict fatigue, more specialized numerical modeling...AFRL-RX-WP-TP-2012-0350 A COMPREHENSIVE STRUCTURAL ANALYSIS PROCESS FOR FAILURE ASSESSMENT IN AIRCRAFT LAP-JOINT MIMICS USING MULTI-MODAL...structural analysis process is presented that includes intra- and inter-modal NDE data fusion. The process includes defect detection, defect

  7. A heart team and multi-modality imaging approach to percutaneous closure of a post-myocardial infarction ventricular septal defect

    PubMed Central

    Iyer, Sunil; Bauer, Thurston; Yeung, Michael; Ramm, Cassandra; Kiser, Andy C.; Caranasos, Thomas G.

    2016-01-01

    Post-infarction ventricular septal defect (PI-VSD) is a devastating complication that carries a high mortality with or without surgical repair. Percutaneous closure is an attractive alternative in select patients, though it requires appropriate characterization of the PI-VSD as well as careful device and patient selection. We describe a multidisciplinary and multi-modality imaging approach to successful percutaneous closure of a PI-VSD. PMID:27054108

  8. MeTA studio: a cross platform, programmable IDE for computational chemist.

    PubMed

    Ganesh, V

    2009-03-01

    The development of a cross-platform, programmable integrated development environment (IDE), MeTA Studio, specifically tailored to (but not restricted to) computational chemists working in the area of quantum chemistry, with an emphasis on handling large molecules, is presented. The IDE consists of a number of modules, which include a visualizer and a programming and collaborative framework. The inbuilt viewer assists in visualizing molecules and their scalar fields and in manually fragmenting a molecule, and it introduces some innovative but simple techniques for handling large molecules. These include a simple Find language and simultaneous multiple camera views of the molecule. Basic tools needed to handle collaborative computing effectively are also included, opening up new vistas for sharing ideas and information among computational chemists working on similar problems. MeTA Studio is an integrated programming environment that provides a rich set of application programming interfaces (APIs) which can be used to easily extend its functionality or build new applications as needed by the users. (http://code.google.com/p/metastudio/).

  9. A cross-platform GUI to control instruments compliant with SCPI through VISA

    NASA Astrophysics Data System (ADS)

    Roach, Eric; Liu, Jing

    2015-10-01

    In nuclear physics experiments, it is necessary and important to control instruments from a PC, which automates many tasks that would otherwise require human operation. Not only does this make long-term measurements possible, but it also makes repetitive operations less error-prone. We created a graphical user interface (GUI) to control instruments connected to a PC through RS232, USB, LAN, etc. The GUI is developed using Qt Creator, a cross-platform integrated development environment, which makes it portable to various operating systems, including those commonly used in mobile devices. The NI-VISA library is used in the back end so that the GUI can control instruments connected through various I/O interfaces without any modification. Commonly used SCPI commands can be sent to different instruments using buttons, sliders, knobs, and other widgets provided by Qt Creator. As an example, we demonstrate how to set and fetch parameters and how to retrieve and display data from an Agilent Digital Storage Oscilloscope X3034A with the GUI. Our GUI can be easily used for other instruments compliant with SCPI and VISA with little or no modification.
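    The pattern of sending SCPI commands over a VISA session can be sketched as follows. `StubSession` is a hypothetical stand-in for a real pyvisa resource (e.g. one opened via `pyvisa.ResourceManager().open_resource(...)`), so the sketch runs without hardware; `*IDN?` is the standard IEEE 488.2 identification query, while the exact command tree (`:TIMebase:SCALe` here) varies by instrument:

```python
class ScpiInstrument:
    """Minimal SCPI front end. `session` is any object exposing
    write()/query(), such as a pyvisa resource."""

    def __init__(self, session):
        self.session = session

    def identify(self):
        # Standard identification query defined by IEEE 488.2
        return self.session.query("*IDN?")

    def set_timebase(self, seconds_per_div):
        # Oscilloscope command family; exact tree varies by vendor.
        self.session.write(f":TIMebase:SCALe {seconds_per_div:g}")

class StubSession:
    """Records commands instead of talking to real hardware."""
    def __init__(self):
        self.sent = []
    def write(self, cmd):
        self.sent.append(cmd)
    def query(self, cmd):
        self.sent.append(cmd)
        return "Agilent Technologies,DSO-X 3034A,MY00000000,02.00"

scope = ScpiInstrument(StubSession())
print(scope.identify().split(",")[1])   # DSO-X 3034A
scope.set_timebase(0.001)
print(scope.session.sent[-1])           # :TIMebase:SCALe 0.001
```

    Because the front end only assumes write()/query(), the same code drives instruments on RS232, USB, or LAN unchanged, which is precisely the portability the VISA layer provides.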

  10. Introducing StatHand: A Cross-Platform Mobile Application to Support Students’ Statistical Decision Making

    PubMed Central

    Allen, Peter J.; Roberts, Lynne D.; Baughman, Frank D.; Loxton, Natalie J.; Van Rooy, Dirk; Rock, Adam J.; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students’ statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand. PMID:26973579

  11. Introducing StatHand: A Cross-Platform Mobile Application to Support Students' Statistical Decision Making.

    PubMed

    Allen, Peter J; Roberts, Lynne D; Baughman, Frank D; Loxton, Natalie J; Van Rooy, Dirk; Rock, Adam J; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students' statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand.

  12. Linked statistical shape models for multi-modal segmentation: application to prostate CT-MR segmentation in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Chowdhury, Najeeb; Chappelow, Jonathan; Toth, Robert; Kim, Sung; Hahn, Stephen; Vapiwala, Neha; Lin, Haibo; Both, Stefan; Madabhushi, Anant

    2011-03-01

    We present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate delineations of a SOI's boundary on one of the modalities may not be readily available, or may be difficult to obtain, for training a SSM. We apply the LSSM in the context of multi-modal prostate segmentation for radiotherapy planning, where we segment the prostate on MRI and CT simultaneously. Prostate capsule segmentation is a critical step in prostate radiotherapy planning, where dose plans have to be formulated on CT. Since accurate delineations of the prostate boundary are very difficult to obtain on CT, pre-treatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler than on CT. Hence, our framework incorporates multi-modal registration of MRI and CT to map 2D boundary delineations of the prostate (obtained from an expert radiation oncologist) on MR training images onto corresponding CT images. The delineations of the prostate capsule on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the building of the LSSM. We acquired 7 MRI-CT patient studies and used the leave-one-out strategy to train and evaluate our LSSM (fLSSM), built using expert ground-truth delineations on MRI and MRI-CT fusion derived capsule delineations on CT. A unique attribute of our fLSSM is that it does not require expert delineations of the capsule on CT. In order to perform prostate MRI segmentation using the fLSSM, we employed a region-based approach in which we deformed the evolving prostate boundary to optimize a mutual-information-based cost criterion that took into account region-based intensity statistics of the image being segmented. The final prostate segmentation was then
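    At the core of any SSM is a point-distribution model: a mean shape plus principal modes of variation learned from corresponding landmarks. A minimal sketch follows (toy hypothetical data, not the paper's fLSSM; a linked SSM would concatenate corresponding MRI and CT landmark vectors per training case before this step):

```python
import numpy as np

def build_ssm(shapes):
    """Point-distribution SSM: shapes is (n_samples, n_points*dim).
    Returns the mean shape and the modes of variation (eigenvectors)
    with their eigenvalues, via SVD of the centred data matrix."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s**2 / (len(X) - 1)   # variance explained by each mode
    return mean, vt, eigvals

def reconstruct(mean, modes, b):
    """Generate a shape from mode coefficients b."""
    return mean + b @ modes[:len(b)]

# toy 2-D contours: 4 landmarks flattened to length-8 vectors
rng = np.random.default_rng(1)
base = np.array([0., 0., 1., 0., 1., 1., 0., 1.])   # unit square
shapes = base + rng.normal(scale=0.05, size=(20, 8))
mean, modes, eigvals = build_ssm(shapes)
print(reconstruct(mean, modes, np.array([2 * eigvals[0]**0.5])).shape)  # (8,)
```

    During segmentation, restricting the coefficients b to a few standard deviations of each mode keeps the evolving boundary within plausible prostate shapes.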

  13. Building an EEG-fMRI Multi-Modal Brain Graph: A Concurrent EEG-fMRI Study

    PubMed Central

    Yu, Qingbao; Wu, Lei; Bridwell, David A.; Erhardt, Erik B.; Du, Yuhui; He, Hao; Chen, Jiayu; Liu, Peng; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D.

    2016-01-01

    The topological architecture of brain connectivity has been well characterized by graph-theory-based analysis. However, previous studies have primarily built brain graphs from a single modality of brain imaging data. Here we develop a framework to construct multi-modal brain graphs using concurrent EEG-fMRI data collected simultaneously during eyes-open (EO) and eyes-closed (EC) resting states. fMRI data are decomposed into independent components with associated time courses by group independent component analysis (ICA). EEG time series are segmented, and spectral power time courses are computed and averaged within 5 frequency bands (delta; theta; alpha; beta; low gamma). EEG-fMRI brain graphs, with EEG electrodes and fMRI brain components serving as nodes, are built by computing correlations within and between fMRI ICA time courses and EEG spectral power time courses. Dynamic EEG-fMRI graphs are built using a sliding-window method, versus static ones treating the entire time course as stationary. At the global level, static graph measures and the properties of dynamic graph measures differ across frequency bands, mainly showing higher values in the eyes-closed state than in the eyes-open state. Nodal-level graph measures of a few brain components also show higher values during eyes closed in specific frequency bands. Overall, these findings incorporate fMRI spatial localization and EEG frequency information that could not be obtained by examining only one modality. This work provides a new approach to examine EEG-fMRI associations within a graph-theoretic framework, with potential application to many topics. PMID:27733821
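    The sliding-window graph construction described above can be sketched as follows (hypothetical node time courses standing in for EEG band-power and fMRI component time series; window and step sizes are illustrative):

```python
import numpy as np

def sliding_window_graphs(ts, win, step):
    """ts: (n_nodes, n_timepoints) array of node time courses.
    Returns one correlation adjacency matrix per window, giving the
    sequence of dynamic graphs; a single call with win == n_timepoints
    would give the static graph."""
    n_nodes, T = ts.shape
    graphs = []
    for start in range(0, T - win + 1, step):
        window = ts[:, start:start + win]
        A = np.corrcoef(window)        # node-by-node correlation
        np.fill_diagonal(A, 0.0)       # no self-loops
        graphs.append(A)
    return graphs

rng = np.random.default_rng(2)
ts = rng.standard_normal((6, 120))     # 6 hypothetical nodes, 120 samples
graphs = sliding_window_graphs(ts, win=30, step=15)
print(len(graphs), graphs[0].shape)    # 7 (6, 6)
```

    Graph measures (e.g. degree, clustering) computed per window then give the time-varying properties contrasted between the eyes-open and eyes-closed states.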

  14. Multi-modal data fusion using source separation: Two effective models based on ICA and IVA and their properties

    PubMed Central

    Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.

    2015-01-01

    Fusion of information from multiple sets of data, in order to extract the features most useful and relevant for a given task, is inherent to many problems we deal with today. Since very little is usually known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets and hence, as we discuss in this paper, provides an attractive solution to the fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model, which has found wide application in medical imaging; the second is the Transposed IVA model, introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. The discussion is supported by simulation results that help highlight the main issues in the implementation of these methods. PMID:26525830
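    The Joint ICA idea can be sketched with scikit-learn's FastICA (an illustrative sketch on random data, not the authors' implementation; the essential point is that the two modalities are concatenated feature-wise so that each component spans both and all components share one set of subject loadings):

```python
import numpy as np
from sklearn.decomposition import FastICA

def joint_ica(modality_a, modality_b, n_components):
    """Joint ICA sketch: per-subject feature vectors from two modalities
    are concatenated along the feature axis and decomposed together.
    Returns per-subject loadings and the two halves of each joint map."""
    X = np.hstack([modality_a, modality_b])   # (subjects, feat_a + feat_b)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    loadings = ica.fit_transform(X)           # (subjects, components)
    maps = ica.mixing_.T                      # (components, features): joint maps
    maps_a = maps[:, :modality_a.shape[1]]
    maps_b = maps[:, modality_a.shape[1]:]
    return loadings, maps_a, maps_b

rng = np.random.default_rng(3)
a = rng.standard_normal((40, 50))   # hypothetical modality 1: 40 subjects
b = rng.standard_normal((40, 30))   # hypothetical modality 2
loads, sa, sb = joint_ica(a, b, n_components=5)
print(loads.shape, sa.shape, sb.shape)   # (40, 5) (5, 50) (5, 30)
```

    The shared loading matrix is what makes the modalities "fully interact": a subject's weight on a component is identical for its fMRI and EEG parts, unlike separate per-modality decompositions.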

  15. WE-D-9A-04: Improving Multi-Modality Image Registration Using Edge-Based Transformations

    SciTech Connect

    Wang, Y; Tyagi, N; Veeraraghavan, H; Deasy, J

    2014-06-15

    Purpose: Multi-modality deformable image registration (DIR) for head and neck (HN) radiotherapy is difficult, particularly when matching computed tomography (CT) scans with magnetic resonance imaging (MRI) scans. We hypothesized that the ‘shared information’ between images of different modalities is to be found in some form of edge-based transformation, and that novel edge-based DIR methods might outperform standard DIR methods. Methods: We propose a novel method that combines gray-scale edge-based morphology and mutual information (MI) in two stages. In the first step, we applied a modification of a previously published mathematical morphology method as an efficient gray-scale edge estimator with a denoising function. The results were fed into an MI-based solver (plastimatch). The method was tested on 5 HN patients with pretreatment CT and MR datasets and associated follow-up weekly MR scans. The follow-up MRs showed significant regression in tumor and normal structure volumes as compared to the pretreatment MRs. The MR images used in this study were obtained using fast spin echo based T2w sequences with a 1 mm isotropic resolution and a FOV matching the CT scan. Results: In all cases, the novel edge-based registration method provided better registration quality than MI-based DIR using the original CT and MRI images. For example, the mismatch in the carotid arteries was reduced from 3–5 mm to within 2 mm. The novel edge-based method with different registration regularization parameters did not show any distorted deformations, in contrast to the non-realistic deformations resulting from MI on the original images. Processing time was 1.3 to 2 times shorter (edge vs. non-edge). In general, we observed quality improvement and a significant calculation-time reduction with the new method. Conclusion: Transforming images to an ‘edge-space,’ if designed appropriately, greatly increases the speed and accuracy of DIR.
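    A simple gray-scale morphological gradient illustrates the kind of edge transform described above (a SciPy sketch with median-filter denoising, not the authors' exact modified-morphology operator):

```python
import numpy as np
from scipy import ndimage

def morphological_edges(img, size=3):
    """Gray-scale morphological gradient (dilation minus erosion) as a
    simple edge estimator; a median filter first suppresses noise.
    Both modalities mapped this way share edge structure that an
    MI-based solver can then match."""
    denoised = ndimage.median_filter(img, size=size)
    dil = ndimage.grey_dilation(denoised, size=(size, size))
    ero = ndimage.grey_erosion(denoised, size=(size, size))
    return dil - ero

img = np.zeros((16, 16))
img[4:12, 4:12] = 100.0            # bright square on dark background
edges = morphological_edges(img)
print(edges.max() > 0 and edges[0, 0] == 0)  # True: response at the border only
```

    Because the gradient responds to intensity transitions rather than absolute intensities, CT and MR images of the same anatomy become more directly comparable in this ‘edge-space’.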

  16. Experimental verification of a novel MEMS multi-modal vibration energy harvester for ultra-low power remote sensing nodes

    NASA Astrophysics Data System (ADS)

    Iannacci, J.; Sordo, G.; Serra, E.; Kucera, M.; Schmid, U.

    2015-05-01

    In this work, we discuss the verification and preliminary experimental characterization of a MEMS-based vibration Energy Harvester (EH) design. The device, named Four-Leaf Clover (FLC), is based on a circular-shaped mechanical resonator with four petal-like mass-spring cascaded systems. This solution introduces several mechanical Degrees of Freedom (DOFs) and therefore enables multiple resonant modes and deformation shapes in the vibration frequency range of interest. The target is to realize a wideband multi-modal EH-MEMS device that overcomes the typical narrowband working characteristics of standard cantilevered EHs by providing a flexible and adaptable power source for ultra-low power electronics in integrated remote sensing nodes (e.g. Wireless Sensor Networks - WSNs) in the Internet of Things (IoT) scenario, aiming at self-powered, energy-autonomous smart systems. Finite Element Method (FEM) simulations of the FLC EH-MEMS show the presence of several resonant modes for vibrations up to 4-5 kHz, and levels of converted power up to a few μW at resonance in closed-loop conditions (i.e. with a resistive load). On the other hand, the first experimental tests of fabricated FLC samples, conducted with a Laser Doppler Vibrometer (LDV), proved the presence of several resonant modes and allowed us to validate the accuracy of the FEM modeling method. This good agreement also holds for the coupled-field behavior of the FLC EH-MEMS. Both measurements and simulations performed at 190 Hz (i.e. out of resonance) showed power generation in the range of nW (Root Mean Square - RMS values). Further steps of this work will include experimental characterization over the full range of vibrations, aiming to prove the complete functionality of the proposed FLC EH-MEMS design concept.
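    The way cascaded mass-spring systems yield multiple resonant modes can be illustrated with a lumped-parameter eigenproblem for one petal-like chain (illustrative masses and stiffnesses, not the FLC device's actual parameters):

```python
import numpy as np

def resonant_frequencies(masses, stiffnesses):
    """Natural frequencies (Hz) of a serial mass-spring chain: spring 0
    anchors mass 0 to ground, spring i couples masses i-1 and i.
    Solves the generalized eigenproblem K x = w^2 M x."""
    n = len(masses)
    M = np.diag(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sort(np.sqrt(np.abs(w2.real))) / (2 * np.pi)

# two cascaded masses -> two resonant modes, both in the sub-5-kHz range
freqs = resonant_frequencies(masses=[1e-6, 1e-6],        # kg
                             stiffnesses=[50.0, 20.0])   # N/m
print(freqs)  # two modal frequencies in Hz
```

    Each added mass-spring stage contributes one more DOF, hence one more resonant mode; four cascaded petals therefore populate the target band with several resonances, which is the widening mechanism the abstract describes.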

  17. Building an EEG-fMRI Multi-Modal Brain Graph: A Concurrent EEG-fMRI Study.

    PubMed

    Yu, Qingbao; Wu, Lei; Bridwell, David A; Erhardt, Erik B; Du, Yuhui; He, Hao; Chen, Jiayu; Liu, Peng; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D

    2016-01-01

    The topological architecture of brain connectivity has been well characterized by graph-theory-based analysis. However, previous studies have primarily built brain graphs from a single modality of brain imaging data. Here we develop a framework to construct multi-modal brain graphs using concurrent EEG-fMRI data collected simultaneously during eyes-open (EO) and eyes-closed (EC) resting states. fMRI data are decomposed into independent components with associated time courses by group independent component analysis (ICA). EEG time series are segmented, and spectral power time courses are computed and averaged within 5 frequency bands (delta; theta; alpha; beta; low gamma). EEG-fMRI brain graphs, with EEG electrodes and fMRI brain components serving as nodes, are built by computing correlations within and between fMRI ICA time courses and EEG spectral power time courses. Dynamic EEG-fMRI graphs are built using a sliding-window method, versus static ones treating the entire time course as stationary. At the global level, static graph measures and the properties of dynamic graph measures differ across frequency bands, mainly showing higher values in the eyes-closed state than in the eyes-open state. Nodal-level graph measures of a few brain components also show higher values during eyes closed in specific frequency bands. Overall, these findings incorporate fMRI spatial localization and EEG frequency information that could not be obtained by examining only one modality. This work provides a new approach to examine EEG-fMRI associations within a graph-theoretic framework, with potential application to many topics.

  18. Effective Beginning Handwriting Instruction: Multi-modal, Consistent Format for 2 Years, and Linked to Spelling and Composing.

    PubMed

    Wolf, Beverly; Abbott, Robert D; Berninger, Virginia W

    2017-02-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders, M = 7 years 1 month, 7 girls) received manuscript handwriting instruction not systematically related to the other literacy activities. ANOVA showed both groups improved on automatic alphabet writing from memory; but ANCOVA with the automatic alphabet writing task as covariate showed that the treatment group improved significantly more than the control group from the second to ninth month of first grade on dictated spelling and recognition of word-specific spellings among phonological foils. In Study 2, new groups received either a second year of manuscript instruction (N = 29, M = 7 years 8 months, 16 girls) or an introduction to cursive (joined) instruction in second grade (N = 24, M = 8 years 0 months, 11 girls), embedded in the Slingerland literacy program. ANCOVA with automatic alphabet writing as covariate showed that those who received a second year of manuscript handwriting instruction improved more on sustained handwriting over 30, 60, and 90 seconds than those who had had only one year of manuscript instruction; both groups improved in spelling and composing from the second to ninth month of second grade. Results are discussed in reference to mastering one handwriting format before introducing another format at a higher grade level and always embedding handwriting instruction in writing and reading instruction aimed at all levels of language.

  19. Multi-Modal Proteomic Analysis of Retinal Protein Expression Alterations in a Rat Model of Diabetic Retinopathy

    PubMed Central

    Kutzler, Lydia; Brucklacher, Robert M.; Bronson, Sarah K.; Kimball, Scot R.; Freeman, Willard M.

    2011-01-01

    Background As a leading cause of adult blindness, diabetic retinopathy is a prevalent and profound complication of diabetes. We have previously reported duration-dependent changes in retinal vascular permeability, apoptosis, and mRNA expression with diabetes in a rat model system. The aim of this study was to identify retinal proteomic alterations associated with functional dysregulation of the diabetic retina to better understand diabetic retinopathy pathogenesis and that could be used as surrogate endpoints in preclinical drug testing studies. Methodology/Principal Findings A multi-modal proteomic approach of antibody (Luminex)-, electrophoresis (DIGE)-, and LC-MS (iTRAQ)-based quantitation methods was used to maximize coverage of the retinal proteome. Transcriptomic profiling through microarray analysis was included to identify additional targets and assess potential regulation of protein expression changes at the mRNA level. The proteomic approaches proved complementary, with limited overlap in proteomic coverage. Alterations in pro-inflammatory, signaling and crystallin family proteins were confirmed by orthogonal methods in multiple independent animal cohorts. In an independent experiment, insulin replacement therapy normalized the expression of some proteins (Dbi, Anxa5) while other proteins (Cp, Cryba3, Lgals3, Stat3) were only partially normalized and Fgf2 and Crybb2 expression remained elevated. Conclusions/Significance These results expand the understanding of the changes in retinal protein expression occurring with diabetes and their responsiveness to normalization of blood glucose through insulin therapy. These proteins, especially those not normalized by insulin therapy, may also be useful in preclinical drug development studies. PMID:21249158

  20. Multi-Modal Homing in Sea Turtles: Modeling Dual Use of Geomagnetic and Chemical Cues in Island-Finding

    PubMed Central

    Endres, Courtney S.; Putman, Nathan F.; Ernst, David A.; Kurth, Jessica A.; Lohmann, Catherine M. F.; Lohmann, Kenneth J.

    2016-01-01

    Sea turtles are capable of navigating across large expanses of ocean to arrive at remote islands for nesting, but how they do so has remained enigmatic. An interesting example involves green turtles (Chelonia mydas) that nest on Ascension Island, a tiny land mass located approximately 2000 km from the turtles’ foraging grounds along the coast of Brazil. Sensory cues that turtles are known to detect, and which might hypothetically be used to help locate Ascension Island, include the geomagnetic field, airborne odorants, and waterborne odorants. One possibility is that turtles use magnetic cues to arrive in the vicinity of the island, then use chemical cues to pinpoint its location. As a first step toward investigating this hypothesis, we used oceanic, atmospheric, and geomagnetic models to assess whether magnetic and chemical cues might plausibly be used by turtles to locate Ascension Island. Results suggest that waterborne and airborne odorants alone are insufficient to guide turtles from Brazil to Ascension, but might permit localization of the island once turtles arrive in its vicinity. By contrast, magnetic cues might lead turtles into the vicinity of the island, but would not typically permit its localization because the field shifts gradually over time. Simulations reveal, however, that the sequential use of magnetic and chemical cues can potentially provide a robust navigational strategy for locating Ascension Island. Specifically, one strategy that appears viable is following a magnetic isoline into the vicinity of Ascension Island until an odor plume emanating from the island is encountered, after which turtles might either: (1) initiate a search strategy; or (2) follow the plume to its island source. These findings are consistent with the hypothesis that sea turtles, and perhaps other marine animals, use a multi-modal navigational strategy for locating remote islands. PMID:26941625

  1. Multi-modal data fusion using source separation: Two effective models based on ICA and IVA and their properties.

    PubMed

    Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D

    2015-09-01

    Fusion of information from multiple sets of data, in order to extract a set of features that are most useful and relevant for the given task, is inherent to many problems we deal with today. Since very little is usually known about the actual interactions among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the joint ICA model, which has found wide application in medical imaging; the second is the transposed IVA model, introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods.
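
    As a minimal sketch of the joint ICA idea (concatenating features from two modalities so they share one mixing matrix), using scikit-learn's FastICA as an illustrative stand-in for the estimators discussed in the paper; the data and dimensions below are invented:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Two synthetic "modalities" observed for the same 100 subjects,
# generated from 3 shared sources with modality-specific loadings.
n_subjects, n_sources = 100, 3
S = rng.laplace(size=(n_subjects, n_sources))   # shared subject profiles
A1 = rng.normal(size=(n_sources, 50))           # modality-1 feature patterns
A2 = rng.normal(size=(n_sources, 80))           # modality-2 feature patterns
X1, X2 = S @ A1, S @ A2

# Joint ICA: stack the feature dimensions so both modalities are
# decomposed with a single shared mixing matrix.
X_joint = np.hstack([X1, X2])                   # shape (100, 130)
ica = FastICA(n_components=n_sources, random_state=0)
subject_loadings = ica.fit_transform(X_joint)   # one loading per subject
joint_maps = ica.components_                    # (3, 130), split across modalities
maps1, maps2 = joint_maps[:, :50], joint_maps[:, 50:]
```

    Each estimated component pairs a modality-1 pattern with a modality-2 pattern through a common subject profile, which is what lets the modalities "fully interact" in the fusion.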

  2. Treatment Needs of Driving While Intoxicated Offenders: The Need for a Multi-modal Approach to Treatment

    PubMed Central

    Mullen, Jillian; Ryan, Stacy R.; Mathias, Charles W.; Dougherty, Donald M.

    2015-01-01

    Objective This study aimed to characterize and compare the treatment needs of adults with driving while intoxicated (DWI) offenses recruited from a correctional residential treatment facility and the community, to provide recommendations for treatment development. Method A total of 119 adults (59 Residential, 60 Community) with at least one DWI offense were administered clinical diagnostic interviews to assess substance use disorders and completed a battery of questionnaires assessing demographic characteristics, legal history, psychiatric diagnoses, medical diagnoses, and health care utilization. Results Almost all (96.6%) DWI offenders met clinical diagnostic criteria for an alcohol use disorder, approximately half of the sample also met diagnostic criteria for co-morbid substance use disorders, and a substantial proportion also reported psychiatric and medical co-morbidities. However, a high percentage were not receiving treatment for these issues, most likely because of limited access to care: the majority of participants had no current health insurance (64.45%) or primary care physician (74.0%). The residential sample had more extensive criminal histories than the community sample but was generally representative of the community in terms of clinical characteristics. For instance, the groups did not differ in rates of substance use, psychiatric and medical health diagnoses, or in the treatment of such issues, with the exception of alcohol abuse treatment. Conclusions DWI offenders represent a clinical population with high levels of complex and competing treatment needs which are not currently being met. Our findings demonstrate the need for standardized screening of DWI offenders and call for the development of a multi-modal treatment approach in efforts to reduce recidivism. PMID:25664371

  3. An Open-Source Approach for Catchment's Physiographic Characterization

    NASA Astrophysics Data System (ADS)

    Di Leo, M.; Di Stefano, M.

    2013-12-01

    A water catchment's hydrologic response is intimately linked to its morphological shape, which is a signature on the landscape of the particular climate conditions that generated the hydrographic basin over time. Furthermore, geomorphologic structures influence hydrologic regimes and land cover (vegetation). For these reasons, a basin's characterization is a fundamental element in hydrological studies. Physiographic descriptors were long extracted manually, but today Geographic Information System (GIS) tools ease this task, offering hydrologists a powerful instrument to save time and improve the accuracy of results. Here we present a program that combines the flexibility of the Python programming language with the reliability of GRASS GIS and automatically performs the catchment's physiographic characterization. GRASS (Geographic Resource Analysis Support System) is a Free and Open Source GIS that today can look back on 30 years of successful development in geospatial data management and analysis, image processing, graphics and map production, spatial modeling, and visualization. The recent development of new hydrologic tools, coupled with the tremendous boost in the existing flow routing algorithms, reduced the computational time and made GRASS a complete toolset for hydrological analysis even for large datasets. The tool presented here is a module called r.basin, following GRASS's traditional nomenclature, where the "r" stands for "raster"; it is available for GRASS version 6.x and, more recently, for GRASS 7. As input it uses a Digital Elevation Model and the coordinates of the outlet, and, powered by the recently developed r.stream.* hydrological tools, it performs the flow calculation, delimits the basin's boundaries, and extracts the drainage network, returning the flow direction and accumulation, the distance to outlet and the hillslope length maps. Based on those maps, it calculates hydrologically meaningful shape factors and
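
    To illustrate the kind of shape factors such a module derives from a basin's area and perimeter, here is a short sketch of two standard morphometric definitions (these are textbook hydrology formulas, not necessarily the exact quantities r.basin reports):

```python
import math

def compactness_coefficient(area_km2: float, perimeter_km: float) -> float:
    """Gravelius compactness Kc = P / (2 * sqrt(pi * A)); Kc = 1 for a circle."""
    return perimeter_km / (2.0 * math.sqrt(math.pi * area_km2))

def circularity_ratio(area_km2: float, perimeter_km: float) -> float:
    """Miller circularity Rc = 4 * pi * A / P**2; Rc = 1 for a circle."""
    return 4.0 * math.pi * area_km2 / perimeter_km ** 2

# A perfectly circular basin of radius 10 km gives the reference value 1
# for both indices; elongated basins give Kc > 1 and Rc < 1.
r = 10.0
area, perim = math.pi * r ** 2, 2.0 * math.pi * r
```

    Indices like these summarize basin shape in a single number, which is why they are useful descriptors in hydrological response studies.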

  4. Government Technology Acquisition Policy: The Case of Proprietary versus Open Source Software

    ERIC Educational Resources Information Center

    Hemphill, Thomas A.

    2005-01-01

    This article begins by explaining the concepts of proprietary and open source software technology, which are now competing in the marketplace. A review of recent individual and cooperative technology development and public policy advocacy efforts, by both proponents of open source software and advocates of proprietary software, subsequently…

  5. 76 FR 75875 - Defense Federal Acquisition Regulation Supplement; Open Source Software Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... Software Public Meeting AGENCY: Defense Acquisition Regulations System, Department of Defense (DoD). ACTION... regarding the use of open source software in DoD contracts. DATES: Public Meeting: January 12, 2012, from 10... for the discussions in the meeting. Please cite ``Public Meeting, DFARS--Open Source Software'' in...

  6. Build, Buy, Open Source, or Web 2.0?: Making an Informed Decision for Your Library

    ERIC Educational Resources Information Center

    Fagan, Jody Condit; Keach, Jennifer A.

    2010-01-01

    When improving a web presence, today's libraries have a choice: using a free Web 2.0 application, opting for open source, buying a product, or building a web application. This article discusses how to make an informed decision for one's library. The authors stress that deciding whether to use a free Web 2.0 application, to choose open source, to…

  7. When Free Isn't Free: The Realities of Running Open Source in School

    ERIC Educational Resources Information Center

    Derringer, Pam

    2009-01-01

    Despite the last few years' growth in awareness of open-source software in schools and the potential savings it represents, its widespread adoption is still hampered. Randy Orwin, technology director of the Bainbridge Island School District in Washington State and a strong open-source advocate, cautions that installing an open-source…

  8. Looking toward the Future: A Case Study of Open Source Software in the Humanities

    ERIC Educational Resources Information Center

    Quamen, Harvey

    2006-01-01

    In this article Harvey Quamen examines how the philosophy of open source software might be of particular benefit to humanities scholars in the near future--particularly for academic journals with limited financial resources. To this end he provides a case study in which he describes his use of open source technology (MySQL database software and…

  9. Open-Source Learning Management Systems: A Predictive Model for Higher Education

    ERIC Educational Resources Information Center

    van Rooij, S. Williams

    2012-01-01

    The present study investigated the role of pedagogical, technical, and institutional profile factors in an institution of higher education's decision to select an open-source learning management system (LMS). Drawing on the results of previous research that measured patterns of deployment of open-source software (OSS) in US higher education and…

  10. Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2007-01-01

    This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…

  11. LipidXplorer: a software for consensual cross-platform lipidomics.

    PubMed

    Herzog, Ronny; Schuhmann, Kai; Schwudke, Dominik; Sampaio, Julio L; Bornstein, Stefan R; Schroeder, Michael; Shevchenko, Andrej

    2012-01-01

    LipidXplorer is open source software that supports the quantitative characterization of complex lipidomes by interpreting large datasets of shotgun mass spectra. LipidXplorer processes spectra acquired on any type of tandem mass spectrometer; it identifies and quantifies molecular species of any ionizable lipid class by considering any known or assumed molecular fragmentation pathway, independently of any resource of reference mass spectra. It also supports any shotgun profiling routine, from high-throughput top-down screening for molecular diagnostics and biomarker discovery to the targeted absolute quantification of low-abundance lipid species. Full documentation on installation and operation of LipidXplorer, including a tutorial, a collection of spectra interpretation scripts, an FAQ, and a user forum, is available through the wiki site at: https://wiki.mpi-cbg.de/wiki/lipidx/index.php/Main_Page.

  12. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
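
    The tile-caching idea behind such a design (keep only recently touched tiles of the pyramid resident, evicting the least recently used tile when the budget is exceeded) can be sketched generically; this is an illustration of the concept, not KOLAM's actual C++ data structure:

```python
from collections import OrderedDict

class TileCache:
    """LRU cache for pyramid tiles keyed by (frame, level, row, col)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key, load_tile):
        """Return the cached tile, loading (and possibly evicting) on a miss."""
        if key in self._tiles:
            self._tiles.move_to_end(key)      # mark as most recently used
            return self._tiles[key]
        tile = load_tile(key)                 # e.g. decode from disk or network
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)   # evict least recently used tile
        return tile
```

    A WAMI viewer would key tiles by frame as well as pyramid level, so panning within a frame or replaying recently viewed frames hits the cache instead of re-decoding imagery.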

  13. Multi-modal defenses in aphids offer redundant protection and increased costs likely impeding a protective mutualism.

    PubMed

    Martinez, Adam J; Doremus, Matthew R; Kraft, Laura J; Kim, Kyungsun L; Oliver, Kerry M

    2017-04-05

    1. The pea aphid, Acyrthosiphon pisum, maintains extreme variation in resistance to its most common parasitoid wasp enemy, Aphidius ervi, which is sourced from two known mechanisms: protective bacterial symbionts, most commonly Hamiltonella defensa, or endogenously encoded defenses. We have recently found that individual aphids may employ each defense individually, occasionally both defenses together, or neither. 2. In field populations, Hamiltonella-infected aphids are found at low to moderate frequencies, and while less is known about the frequency of resistant genotypes, they show up less often than susceptible genotypes in field collections. To better understand these patterns, we sought to compare the strengths and costs of both types of defense, individually and together, in order to elucidate the selective pressures that maintain multi-modal defense mechanisms or that may favor one over the other. 3. We experimentally infected five aphid genotypes (two lowly and three highly resistant), each with two symbiont strains, Hamiltonella-APSE8 (moderate protection) and Hamiltonella-APSE3 (high protection). This resulted in three sublines per genotype: uninfected, +APSE8, and +APSE3. Each of the fifteen total sublines was first subjected to a parasitism assay to determine its resistance phenotype, and in a second experiment a subset were chosen to compare fitness (fecundity and survivorship) in the presence and absence of parasitism. 4. In susceptible aphid genotypes, parasitized sublines infected with Hamiltonella generally showed increased protection with direct fitness benefits, but clear infection costs to fitness in the absence of parasitism. In resistant genotypes, Hamiltonella infection rarely conferred additional protection, often further reduced fecundity and survivorship when enemy challenged, and resulted in constitutive fitness costs in the absence of parasitism.
We also identified strong aphid-genotype X symbiont-strain interactions, such that the best defensive

  14. Cross-Platform Graphical User Interface with fast 3-D Rendering for Particle-in-Cell Simulations

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David; Luetkemeyer, Kelly; Cary, John

    1999-11-01

    The Graphical User Interface (GUI) for XOOPIC (X11-based Object-Oriented Particle-in-Cell) is being ported to Qt, a cross-platform C++ windowing toolkit, thus permitting the code to run on PCs running both Windows 95/98/NT and Linux, as well as on all commercial Unix platforms. All 3-D graphics will be handled through OpenGL, the cross-platform standard for fast 3-D rendering. The use of object-oriented design (OOD) techniques keeps the GUI/physics interface clean, and minimizes the impact of GUI development on the physics code. OOD also improves the maintainability and extensibility of large scientific simulation codes, while allowing for cross-platform portability and ready interchange of individual algorithms or entire physics kernels. Planned new GUI features include interactive modification of the simulation parameters, including generation of a slowly-varying mesh and automatic updating of a corresponding input file. Improved modeling of high-power microwave tubes is one of the primary applications targeted by this project.

  15. Open source tools and toolkits for bioinformatics: significance, and where are we?

    PubMed

    Stajich, Jason E; Lapp, Hilmar

    2006-09-01

    This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.

  16. Long-term quality of life after intensified multi-modality treatment of oral cancer including intra-arterial induction chemotherapy and adjuvant chemoradiation

    PubMed Central

    Kovács, Adorján F.; Stefenelli, Ulrich; Thorn, Gerrit

    2015-01-01

    Background: Quality of life (QoL) studies are well established when accompanying trials in head and neck cancer, but studies on long-term survivors are rare. Aims: The aim was to evaluate long-term follow-up patients treated with an intensified multi-modality therapy. Setting and Design: Cross-sectional study, tertiary care center. Patients and Methods: A total of 135 oral/oropharyngeal cancer survivors having been treated with an effective four-modality treatment (intra-arterial induction chemotherapy, radical surgery, adjuvant radiation, concurrent systemic chemotherapy) completed European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 and HN35 questionnaires. Mean distance to treatment was 6.1 (1.3–16.6) years. Results were compared with a reference patient population (EORTC reference manual). In-study group comparison was also carried out. Statistical Analysis: One-sample t-test, Mann–Whitney test, Kruskal–Wallis analysis. Results: QoL scores of both populations were well comparable. Global health status, cognitive and social functioning, fatigue, social eating, status of teeth, mouth opening and dryness, and sticky saliva were significantly worse in the study population; pain and need for pain killers, cough, need for nutritional support, and problems with weight loss and gain were judged to be significantly less. Patients 1-year posttreatment had generally worse scores as compared to patients with two or more years distance to treatment. Complex reconstructive measures and adjuvant (chemo)radiation were main reasons for significant impairment of QoL. Conclusion: Subjective disease status of patients following a maximized multi-modality treatment showed an expectably high degree of limitations, but was generally comparable to a reference group treated less intensively, suggesting that the administration of an intensified multi-modality treatment is feasible in terms of the QoL/effectivity ratio. PMID:26389030

  17. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open source software for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.

  18. Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS.

    PubMed

    Zaidan, A A; Zaidan, B B; Al-Haiqi, Ahmed; Kiah, M L M; Hussain, Muzammil; Abdulnabi, Mohamed

    2015-02-01

    Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed, and a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were ranked on a set of metric outcomes using an integrated Analytic Hierarchy Process (AHP) and TOPSIS approach. The experimental results showed that GNUmed and OpenEMR achieved better ranking scores than the other open-source EMR software packages.
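
    The TOPSIS step (ranking alternatives by relative closeness to an ideal solution) can be sketched in a few lines of NumPy; the decision matrix, weights, and criteria below are invented for illustration and are not the study's actual evaluation data. In the integrated approach, the weights would come from AHP pairwise comparisons:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).

    benefit[j] is True when larger values of criterion j are better.
    Returns closeness scores in [0, 1]; higher is better.
    """
    X = np.asarray(matrix, dtype=float)
    # 1) Vector-normalize each criterion, then apply the criterion weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights)
    # 2) Ideal and anti-ideal values per criterion.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3) Euclidean distances to both, then relative closeness.
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Three hypothetical EMR packages scored on two benefit criteria.
scores = topsis([[5, 5], [3, 3], [1, 1]], [0.5, 0.5], [True, True])
```

    The highest-scoring row would be the recommended package; an alternative that is best on every criterion coincides with the ideal point and scores exactly 1.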

  19. ORBKIT: A modular python toolbox for cross-platform postprocessing of quantum chemical wavefunction data.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe; Paulus, Beate; Hege, Hans-Christian; Schild, Axel

    2016-06-15

    ORBKIT is a toolbox for postprocessing electronic structure calculations, based on a highly modular and portable Python architecture. The program allows computing a multitude of electronic properties of molecular systems on arbitrary spatial grids from the basis set representation of their electronic wavefunctions, as well as several grid-independent properties. The required data can be extracted directly from the standard output of a large number of quantum chemistry programs. ORBKIT can be used as a standalone program to determine standard quantities, for example, the electron density, molecular orbitals, and derivatives thereof. The cornerstone of ORBKIT is its modular structure. The existing basic functions can be arranged in an individual way and can be easily extended by user-written modules to determine any other derived quantity. ORBKIT offers multiple output formats that can be processed by common visualization tools (VMD, Molden, etc.). Additionally, ORBKIT possesses routines to order molecular orbitals computed at different nuclear configurations according to their electronic character and to interpolate the wavefunction between these configurations. The program is open source under the GNU LGPLv3 license and freely available at https://github.com/orbkit/orbkit/. This article provides an overview of ORBKIT with particular focus on its capabilities and applicability, and includes several example calculations. © 2016 Wiley Periodicals, Inc.
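
    As a toy illustration of the "property on an arbitrary spatial grid" idea (plain NumPy, not ORBKIT code; it only mirrors the numerics such tools automate), the density of a single normalized s-type Gaussian orbital can be evaluated on a regular grid and integrated back to its electron count:

```python
import numpy as np

# Regular Cartesian grid (atomic units), the simplest "spatial grid" case.
pts = np.linspace(-6.0, 6.0, 81)
dx = pts[1] - pts[0]
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")

# One normalized s-type Gaussian orbital with exponent alpha = 1.
alpha = 1.0
phi = (2.0 * alpha / np.pi) ** 0.75 * np.exp(-alpha * (X**2 + Y**2 + Z**2))

# The one-orbital electron density, and its grid integral (should be ~1).
rho = phi ** 2
n_electrons = rho.sum() * dx ** 3
```

    A real wavefunction postprocessor does the same bookkeeping for many contracted basis functions and molecular orbital coefficients, but the grid evaluation and quadrature step is the same in spirit.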

  20. Anatomy of BioJS, an open source community for the life sciences

    PubMed Central

    Yachdav, Guy; Goldberg, Tatyana; Wilzbach, Sebastian; Dao, David; Shih, Iris; Choudhary, Saket; Crouch, Steve; Franz, Max; García, Alexander; García, Leyla J; Grüning, Björn A; Inupakutika, Devasena; Sillitoe, Ian; Thanki, Anil S; Vieira, Bruno; Villaveces, José M; Schneider, Maria V; Lewis, Suzanna; Pettifer, Steve; Rost, Burkhard; Corpas, Manuel

    2015-01-01

    BioJS is an open source software project that develops visualization tools for different types of biological data. Here we report on the factors that influenced the growth of the BioJS user and developer community, and outline our strategy for building on this growth. The lessons we have learned on BioJS may also be relevant to other open source software projects. DOI: http://dx.doi.org/10.7554/eLife.07009.001 PMID:26153621

  1. Anatomy of BioJS, an open source community for the life sciences.

    PubMed

    Yachdav, Guy; Goldberg, Tatyana; Wilzbach, Sebastian; Dao, David; Shih, Iris; Choudhary, Saket; Crouch, Steve; Franz, Max; García, Alexander; García, Leyla J; Grüning, Björn A; Inupakutika, Devasena; Sillitoe, Ian; Thanki, Anil S; Vieira, Bruno; Villaveces, José M; Schneider, Maria V; Lewis, Suzanna; Pettifer, Steve; Rost, Burkhard; Corpas, Manuel

    2015-07-08

    BioJS is an open source software project that develops visualization tools for different types of biological data. Here we report on the factors that influenced the growth of the BioJS user and developer community, and outline our strategy for building on this growth. The lessons we have learned on BioJS may also be relevant to other open source software projects.

  2. Biosecurity and Open-Source Biology: The Promise and Peril of Distributed Synthetic Biological Technologies.

    PubMed

    Evans, Nicholas G; Selgelid, Michael J

    2015-08-01

    In this article, we raise ethical concerns about the potential misuse of open-source biology (OSB): biological research and development that progresses through an organisational model of radical openness, deskilling, and innovation. We compare this organisational structure to that of the open-source software model, and detail salient ethical implications of this model. We demonstrate that OSB, in virtue of its commitment to openness, may be resistant to governance attempts.

  3. Open-Source web-based geographical information system for health exposure assessment

    PubMed Central

    2012-01-01

    This paper presents the design and development of an open source web-based Geographical Information System allowing users to visualise, customise and interact with spatial data within their web browser. The developed application shows that, using solely open source software, it was possible to develop a customisable web-based GIS application that provides the functions necessary to convey health and environmental data to experts and non-experts alike, without the requirement of proprietary software. PMID:22233606

  4. Increasing Open Source Software Integration on the Department of Defense Unclassified Desktop

    DTIC Science & Technology

    2008-06-01

    military software, much of it is absorbed by license fees for computer operating systems and general-purpose office automation applications. Although...many available mature, robust Open Source Software (OSS) solutions. In particular, Linux-based operating systems have helped bring free, open source...thesis examines the feasibility of using OSS, particularly Linux-based operating systems , on unclassified DoD desktop computers. Specific attention is

  5. XTALOPT version r9: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Falls, Zackary; Lonie, David C.; Avery, Patrick; Shamp, Andrew; Zurek, Eva

    2016-02-01

    A new version of XTALOPT, an evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. XTALOPT is published under the GNU General Public License (GPL), an open source license recognized by the Open Source Initiative. The new version incorporates many bug fixes and new features, as detailed below.

  6. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through simple methods such as HTTP directory listing for long-term preservation, while also aiming to provide rich web applications for ease of access using modern web technologies based on open source software. This presentation showcases the availability of open source software throughout our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS). As the WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data. The main purpose of this application is public outreach. The NASA World Wind Java SDK was used for its development. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations. It uses Highcharts to draw graphs in web browsers. FLOW is a tool to simulate the field of view of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS, and its license is the BSD 3-Clause License. The SPICE Toolkit is essential to compile FLOW. The SPICE Toolkit is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.

  7. What's mine is yours-open source as a new paradigm for sustainable healthcare education.

    PubMed

    Ellaway, Rachel; Martin, Ross D

    2008-01-01

    Free and open access to information, and increasingly to digital content and tools, is one of the defining characteristics of the Internet, and as such it presents a challenge to traditional models of development and provision of educational materials and activities. Open source is a particular way of giving access to materials and processes, in that the source material is available alongside the finished artifact, thereby allowing subsequent adaptation and redevelopment by anyone wishing to undertake the work. Open source is now being developed as a concept that can be applied in settings outside software development (Kelty 2005), and it is increasingly being linked to moral and ethical agendas about the nature of society itself (Lessig 2005). The open source movement also raises issues regarding authority, challenging the role of the expert voice. The imperative of open source and associated economic and social factors all point to an opportunity-rich area for both reflection and development. This paper explores the open source phenomenon and considers ways in which open source principles and ideas can benefit and extend the provision of a wide range of healthcare education services and activities.

  8. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave.

    PubMed

    Oosterhof, Nikolaas N; Connolly, Andrew C; Haxby, James V

    2016-01-01

    CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA.

  9. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave

    PubMed Central

    Oosterhof, Nikolaas N.; Connolly, Andrew C.; Haxby, James V.

    2016-01-01

    CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA PMID:27499741

  10. A Parametric Empirical Bayesian Framework for the EEG/MEG Inverse Problem: Generative Models for Multi-Subject and Multi-Modal Integration.

    PubMed

    Henson, Richard N; Wakeman, Daniel G; Litvak, Vladimir; Friston, Karl J

    2011-01-01

    We review recent methodological developments within a parametric empirical Bayesian (PEB) framework for reconstructing intracranial sources of extracranial electroencephalographic (EEG) and magnetoencephalographic (MEG) data under linear Gaussian assumptions. The PEB framework offers a natural way to integrate multiple constraints (spatial priors) on this inverse problem, such as those derived from different modalities (e.g., from functional magnetic resonance imaging, fMRI) or from multiple replications (e.g., subjects). Using variations of the same basic generative model, we illustrate the application of PEB to three cases: (1) symmetric integration (fusion) of MEG and EEG; (2) asymmetric integration of MEG or EEG with fMRI, and (3) group-optimization of spatial priors across subjects. We evaluate these applications on multi-modal data acquired from 18 subjects, focusing on energy induced by face perception within a time-frequency window of 100-220 ms, 8-18 Hz. We show the benefits of multi-modal, multi-subject integration in terms of the model evidence and the reproducibility (over subjects) of cortical responses to faces.
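Under the linear Gaussian assumptions described above, the generative model and its source estimate take a standard hierarchical form. The notation below is generic and may differ from the paper's; it is given only to make the structure of the PEB approach concrete:

```latex
% Two-level linear Gaussian generative model for the M/EEG inverse problem:
% sensor data Y, lead field L, source activity J, sensor noise E.
Y = LJ + E, \qquad E \sim \mathcal{N}(0, C_e), \qquad J \sim \mathcal{N}(0, C_j),
\qquad C_j = \sum_i \lambda_i Q_i
% The Q_i are spatial covariance components (priors, e.g. fMRI-derived),
% weighted by hyperparameters \lambda_i optimized via the model evidence.
% The conditional (MAP) source estimate is then
\hat{J} = C_j L^{\top} \left( L C_j L^{\top} + C_e \right)^{-1} Y
```

Multi-modal and multi-subject integration enter through the choice and pooling of the covariance components \(Q_i\), whose weights are optimized by maximizing the model evidence.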

  11. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    SciTech Connect

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-02-02

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k3-weighted EXAFS data.
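The pre-edge background subtraction step can be illustrated with the degree-1 case (PySpline supports higher polynomial orders). This pure-Python sketch is not PySpline code, only a demonstration of the operation it performs:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def subtract_pre_edge(energies, mu, pre_lo, pre_hi):
    """Fit a line to the pre-edge region [pre_lo, pre_hi] and subtract it
    from the whole spectrum -- PySpline's polynomial pre-edge step,
    restricted to degree 1 for brevity."""
    region = [(e, m) for e, m in zip(energies, mu) if pre_lo <= e <= pre_hi]
    a, b = fit_line([e for e, _ in region], [m for _, m in region])
    return [m - (a + b * e) for e, m in zip(energies, mu)]

# Synthetic spectrum: linear background plus an edge step of height 5.
energies = list(range(20))
mu = [1.0 + 0.1 * e + (5.0 if e >= 10 else 0.0) for e in energies]
flat = subtract_pre_edge(energies, mu, pre_lo=0, pre_hi=9)
```

After subtraction the pre-edge region sits at zero and the post-edge region at the step height, ready for normalization and splining.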

  12. Open source drug discovery--a new paradigm of collaborative research in tuberculosis drug development.

    PubMed

    Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K

    2011-09-01

    It is being realized that the traditional closed-door and market-driven approaches for drug discovery may not be the best suited model for the diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternate paradigm of drug discovery process. The current model, constrained by limitations for collaboration and for sharing of resources with confidentiality, hampers the opportunities for bringing expertise from diverse fields. These limitations hinder the possibilities of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, taking up multi-faceted approaches and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives towards continued pursuit for new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trial in a non-exclusive manner by participation of multiple companies, with majority funding from Open Source Drug Discovery. This will ensure availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery.

  13. Utilization of open source electronic health record around the world: A systematic review

    PubMed Central

    Aminpour, Farzaneh; Sadoughi, Farahnaz; Ahamdi, Maryam

    2014-01-01

    Many projects on developing Electronic Health Record (EHR) systems have been carried out in many countries. The current study was conducted to review the published data on the utilization of open source EHR systems in different countries all over the world. Using free-text and keyword search techniques, six bibliographic databases were searched for related articles. The identified papers were screened and reviewed in several stages for relevance and validity. The findings showed that open source EHRs have been widely used in resource-limited regions in all continents, especially in Sub-Saharan Africa and South America. This creates opportunities to improve national healthcare, especially in developing countries with minimal financial resources. Open source technology is a solution to overcome the problems of high cost and inflexibility associated with proprietary health information systems. PMID:24672566

  14. PolarSys: Maturity and Innovation for Open Source Tools for the Engineering of Embedded Systems

    NASA Astrophysics Data System (ADS)

    Blondelle, Gael; Arberet, Paul; Faudou, Raphael; Gaufillet, Pierre; Gerard, Sebastien; Langlois, Benoit; Mazzini, Silvia; Rossignol, Alain; Toupin, Dominique; Yang, Yves

    2013-08-01

    This paper presents PolarSys, the industrial open source community for the development and maturation of tools for the engineering of embedded systems. PolarSys was created in 2012 as an Eclipse Industry Working Group, a super community starting in the aerospace domain and quickly attracting other industry domains that rely heavily on embedded systems, such as telecommunications. PolarSys fosters open innovation to create better methods and tools, targets more computer assistance and automation in the development of complex and critical embedded systems, and addresses specific issues like tool qualification and support of long-lasting missions. PolarSys not only provides a state-of-the-art infrastructure for open source projects, but also implements specific processes to improve project maturity and to organize a sustainable ecosystem where industrial users and open source providers work together.

  15. Open Data, Open Source and Open Standards in chemistry: The Blue Obelisk five years on

    PubMed Central

    2011-01-01

    Background The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards. Results This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry. Conclusions We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community. PMID:21999342

  16. The Open Source DataTurbine Initiative: Streaming Data Middleware for Environmental Observing Systems

    NASA Technical Reports Server (NTRS)

    Fountain T.; Tilak, S.; Shin, P.; Hubbard, P.; Freudinger, L.

    2009-01-01

    The Open Source DataTurbine Initiative is an international community of scientists and engineers sharing a common interest in real-time streaming data middleware and applications. The technology base of the OSDT Initiative is the DataTurbine open source middleware. Key applications of DataTurbine include coral reef monitoring, lake monitoring and limnology, biodiversity and animal tracking, structural health monitoring and earthquake engineering, airborne environmental monitoring, and environmental sustainability. DataTurbine software emerged as a commercial product in the 1990s from collaborations between NASA and private industry. In October 2007, a grant from the USA National Science Foundation (NSF) Office of Cyberinfrastructure allowed us to transition DataTurbine from a proprietary software product into an open source software initiative. This paper describes the DataTurbine software and highlights key applications in environmental monitoring.
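DataTurbine's core abstraction is a ring buffer of timestamped samples from which sinks can request recent data. The Python sketch below illustrates that streaming pattern only; the class and method names are invented for this example and are not the DataTurbine API:

```python
from collections import deque

class StreamChannel:
    """Minimal ring-buffered data channel: when the buffer is full, the
    newest sample overwrites the oldest, so sources never block and sinks
    always see the most recent window of data (illustrative sketch only)."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def put(self, timestamp, value):
        """Source side: append a timestamped sample."""
        self.buf.append((timestamp, value))

    def fetch_since(self, t0):
        """Sink side: request all buffered samples at or after time t0."""
        return [(t, v) for t, v in self.buf if t >= t0]

# Capacity 3: pushing five samples keeps only the newest three (t = 2, 3, 4).
chan = StreamChannel(capacity=3)
for t in range(5):
    chan.put(t, t * 10)
```

The `maxlen` deque gives the overwrite-oldest semantics with no explicit index bookkeeping.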

  17. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may

  18. Open-source genomic analysis of Shiga-toxin-producing E. coli O104:H4.

    PubMed

    Rohde, Holger; Qin, Junjie; Cui, Yujun; Li, Dongfang; Loman, Nicholas J; Hentschke, Moritz; Chen, Wentong; Pu, Fei; Peng, Yangqing; Li, Junhua; Xi, Feng; Li, Shenghui; Li, Yin; Zhang, Zhaoxi; Yang, Xianwei; Zhao, Meiru; Wang, Peng; Guan, Yuanlin; Cen, Zhong; Zhao, Xiangna; Christner, Martin; Kobbe, Robin; Loos, Sebastian; Oh, Jun; Yang, Liang; Danchin, Antoine; Gao, George F; Song, Yajun; Li, Yingrui; Yang, Huanming; Wang, Jian; Xu, Jianguo; Pallen, Mark J; Wang, Jun; Aepfelbacher, Martin; Yang, Ruifu

    2011-08-25

    An outbreak caused by Shiga-toxin–producing Escherichia coli O104:H4 occurred in Germany in May and June of 2011, with more than 3000 persons infected. Here, we report a cluster of cases associated with a single family and describe an open-source genomic analysis of an isolate from one member of the family. This analysis involved the use of rapid, bench-top DNA sequencing technology, open-source data release, and prompt crowd-sourced analyses. In less than a week, these studies revealed that the outbreak strain belonged to an enteroaggregative E. coli lineage that had acquired genes for Shiga toxin 2 and for antibiotic resistance.

  19. Experiences with moving to open source standards for building and packaging

    NASA Astrophysics Data System (ADS)

    van Dok, D. H.; Sallé, M.; Koeroo, O. A.

    2014-06-01

    The LCMAPS family of grid security middleware was developed during a series of European grid projects from 2001 until 2013. Beginning in 2009, we actively moved away from ETICS, the project-specific build system, to common open-source tools for building and packaging, such as the GNU Autotools and the Fedora and Debian tool sets. By following the guidelines of these mainstream distributions, and by improving the source code to fit the commonly available open source tools, we have established low-cost, long-term sustainability of the code base.
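To make the target tool chain concrete, a library project of this kind reduces to a few Autotools input files. The project and file names below are hypothetical, sketched only to show the shape of the setup, not taken from the LCMAPS sources:

```
# configure.ac -- hypothetical minimal skeleton
AC_INIT([lcmaps-example], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
LT_INIT
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- build one installable shared library
lib_LTLIBRARIES = liblcmaps_example.la
liblcmaps_example_la_SOURCES = example.c
```

From there, `autoreconf -i && ./configure && make distcheck` produces a release tarball that the standard Fedora and Debian packaging workflows can consume directly.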

  20. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations, which can deter meteor enthusiasts from owning one. In recent years, small card-sized computers have become widely available and are used for numerous applications. To utilize such computers for meteor work, software that can run on them is needed. In this paper we present a detailed description of newly developed open-source software for fireball and meteor detection, optimized for running on low-cost single-board computers. Furthermore, an update is given on the development of automated open-source software that will handle video capture, fireball and meteor detection, astrometry, and photometry.
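The simplest building block of video meteor detection is temporal frame differencing: flag pixels that brighten sharply between consecutive frames. The pure-Python sketch below illustrates that idea only; it is not the paper's software, and real detectors add noise filtering and line fitting across many frames:

```python
def detect_motion(prev_frame, frame, threshold=40):
    """Return (x, y) coordinates of pixels whose brightness increased by
    more than `threshold` between two consecutive grayscale frames --
    the basic temporal-differencing step of video meteor detection."""
    hits = []
    for y, (prev_row, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(prev_row, row)):
            if c - p > threshold:
                hits.append((x, y))
    return hits

# Two tiny 4x4 grayscale frames: a bright streak appears in the second one.
f0 = [[10] * 4 for _ in range(4)]
f1 = [[10] * 4 for _ in range(4)]
f1[1][1] = 200
f1[2][2] = 200
hits = detect_motion(f0, f1)
```

On real hardware the same loop would run over camera frames (typically via an array library for speed), with the flagged pixels then grouped and tested for the straight-line motion characteristic of meteors.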