Sample records for source image analysis

  1. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses, and Granger causality analysis is then applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: In simulation studies, localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ~20%, in estimating the underlying brain networks. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node locations and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
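
    The directional test at the heart of this pipeline — does one extracted source time-course help predict another? — can be sketched in a few lines. Below is a minimal bivariate Granger comparison in Python/NumPy on a synthetic AR system; it is a didactic sketch, not the eConnectome implementation (model order selection and significance testing are simplified away).

```python
import numpy as np

def granger_gain(x, y, p=2):
    """Log ratio of residual variances: restricted (y's own lags) vs.
    full (y's and x's lags) AR(p) model. Values well above 0 suggest
    that x Granger-causes y."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    both = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_both = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return np.log(r_own.var() / r_both.var())

# Synthetic two-node network: node x drives node y with a one-sample delay.
rng = np.random.default_rng(0)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

print(granger_gain(x, y) > granger_gain(y, x))  # direction x -> y dominates
```

    In practice one would compute such gains pairwise over all identified network nodes and threshold them statistically to obtain the directed connectivity graph.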

  2. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and Granger causality analysis is then applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ∼20%, in estimating the underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node locations and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.

  3. Open source tools for fluorescent imaging.

    PubMed

    Hamilton, Nicholas A

    2012-01-01

    As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical both to remove bottlenecks in throughput and to fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods, with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. The Impact of a New Speckle Holography Analysis on the Galactic Center Orbits Initiative

    NASA Astrophysics Data System (ADS)

    Mangian, John; Ghez, Andrea; Gautam, Abhimat; Gallego, Laly; Schödel, Rainer; Lu, Jessica; Chen, Zhuo; UCLA Galactic Center Group; W.M. Keck Observatory Staff

    2018-01-01

    The Galactic Center Orbit Initiative has used two decades of high angular resolution imaging data from the W. M. Keck Observatory to make astrometric measurements of stellar motion around our Galaxy's central supermassive black hole. We present an analysis of a new approach to ten years of speckle imaging data (1995 - 2005) that has been processed with a new holography analysis. This analysis has (1) improved the image quality near the edge of the combined speckle frame and (2) increased the depth of the images and therefore increased the number of sources detected throughout the entire image. By directly comparing each holography analysis, we find a 41% increase in total detected sources and an 81% increase in sources farther than 3" from the central black hole (Sgr A*). Further, we find a 49% increase in sources of K-band magnitude greater than the old holography limiting magnitude due to the reduction of light halos surrounding bright sources.

  5. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319
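
    The operator-plugin extensibility described above can be illustrated with a minimal registry pattern. This is a hypothetical Python sketch of the general idea only — IQM itself is written in Java and its actual plugin interface differs:

```python
import numpy as np

# Hypothetical operator registry; names and signatures are illustrative.
OPERATORS = {}

def operator(name):
    """Decorator that registers a function as a named image operator."""
    def register(fn):
        OPERATORS[name] = fn
        return fn
    return register

@operator("invert")
def invert(img):
    return img.max() - img

@operator("threshold")
def threshold(img, level=0.5):
    return (img > level).astype(img.dtype)

def run_pipeline(img, steps):
    """Apply a user-composed list of (operator-name, kwargs) steps."""
    for name, kwargs in steps:
        img = OPERATORS[name](img, **kwargs)
    return img

img = np.linspace(0.0, 1.0, 9).reshape(3, 3)
out = run_pipeline(img, [("invert", {}), ("threshold", {"level": 0.5})])
print(out)
```

    A scripting interface like IQM's Groovy layer essentially exposes such a registry so that pipelines can be composed without recompiling the application.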

  6. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  7. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. Much of what they offer is now considered standard in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.

  8. Studies of EGRET sources with a novel image restoration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi

    2007-07-12

    We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original in that it utilizes the PSF (point spread function) calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.
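
    The Richardson-Lucy iteration underlying this technique takes only a few lines. The sketch below is the plain 1-D algorithm in NumPy, without the per-event PSF handling or wavelet filtering the abstract describes:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Basic Richardson-Lucy deconvolution (1-D, flat initial estimate)."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blur = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blur, 1e-12)  # avoid division by zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Toy example: a point source blurred by a broad PSF.
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(31)
truth[15] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
print(int(np.argmax(restored)))  # → 15
```

    The iteration sharpens the blurred peak back toward the point source; the per-event PSF variant would simply use a different kernel for each photon's energy and incidence angle.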

  9. BATSE imaging survey of the Galactic plane

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Barret, D.; Bloser, P. F.; Zhang, S. N.; Robinson, C.; Harmon, B. A.

    1997-01-01

    The Burst and Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO) provides all-sky monitoring capability, and its occultation analysis and occultation imaging enable new and fainter sources to be searched for in relatively crowded fields. The occultation imaging technique is used in combination with an automated BATSE image scanner, allowing an analysis of large data sets of occultation images for detections of candidate sources and for the construction of source catalogs and databases. This automated image scanner system is being tested on archival data in order to optimize the search and detection thresholds. The image search system, its calibration results, and preliminary survey results on archival data are reported. The aim of the survey is to identify a complete sample of black hole candidates in the Galaxy and constrain the number of black hole systems and neutron star systems.

  10. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
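
    The physical principle such spectrally resolved reconstruction exploits — deeper sources appear redder because tissue attenuation drops steeply across 560-660 nm — can be sketched as a log-linear fit. The attenuation coefficients below are invented for illustration, and this is not the Living Image package's actual reconstruction algorithm:

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/mm) at the six
# filter wavelengths used in the study (nm, listed for context only).
wavelengths = np.array([560, 580, 600, 620, 640, 660])
mu = np.array([2.0, 1.6, 0.9, 0.6, 0.45, 0.35])

def fit_depth(intensities, mu):
    """Log-linear fit of ln I = ln S - mu * d for depth d and strength S."""
    A = np.column_stack([np.ones_like(mu), -mu])
    coef, *_ = np.linalg.lstsq(A, np.log(intensities), rcond=None)
    log_S, d = coef
    return np.exp(log_S), d

# Simulate a source of strength 5.0 buried 3 mm deep, with 2% noise.
rng = np.random.default_rng(1)
true_S, true_d = 5.0, 3.0
I = true_S * np.exp(-mu * true_d) * rng.normal(1.0, 0.02, size=mu.size)
S_hat, d_hat = fit_depth(I, mu)
print(round(d_hat, 1), round(S_hat, 1))
```

    The sketch also makes the paper's shallow-source caveat intuitive: when the depth is small relative to the mean free path, the spectral ratios barely change and the fit becomes ill-conditioned.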

  11. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
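
    The L1-minimum-norm step at the heart of such a method can be illustrated with a generic iterative soft-thresholding (ISTA) solver on a toy leadfield. This is a sketch of L1-regularized source estimation in general, not the Fast-VESTAL algorithm itself (which operates on dominant spatial modes of the sensor covariance rather than raw sensor data):

```python
import numpy as np

def l1_min_norm(A, b, lam=0.5, n_iter=1000):
    """ISTA for min ||A x - b||^2 / 2 + lam * ||x||_1 (sparse sources)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 200))             # toy leadfield: 64 sensors, 200 sources
x_true = np.zeros(200)
x_true[[20, 120]] = [1.0, -0.8]            # two active sources
b = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = l1_min_norm(A, b)
top2 = set(np.argsort(np.abs(x_hat))[-2:])
print(top2 == {20, 120})                   # sparse solver finds both sources
```

    The L1 penalty is what yields focal source images; an L2 (classical minimum-norm) penalty on the same problem would smear the two sources across many dipoles.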

  12. MEG Source Imaging Method using Fast L1 Minimum-norm and its Applications to Signals with Brain Noise and Human Resting-state Source Amplitude Images

    PubMed Central

    Huang, Ming-Xiong; Huang, Charles W.; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L.; Baker, Dewleen G.; Song, Tao; Harrington, Deborah L.; Theilmann, Rebecca J.; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M.; Edgar, J. Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T.; Drake, Angela; Lee, Roland R.

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL’s performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL’s performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer’s problems of signal leaking and distorted source time-courses. PMID:24055704

  13. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a JPEG image of the desired quality. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
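
    The split-and-filter idea behind NDPI-Splitter can be sketched with plain array slicing. This is a hypothetical Python version of the logic only — the real tool is Java and operates on NDPI slide files, not in-memory arrays:

```python
import numpy as np

def split_tiles(img, tile=256, min_frac=0.01):
    """Split an image array into tiles, skipping nearly-empty ones.

    Tiles whose non-background fraction falls below min_frac are
    filtered out, mimicking NDPI-Splitter's empty-image filter.
    """
    tiles = []
    h, w = img.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            t = img[i:i + tile, j:j + tile]
            if (t > 0).mean() >= min_frac:
                tiles.append(((i, j), t))
    return tiles

# A mostly-empty 512x512 "slide" with one stained region.
img = np.zeros((512, 512))
img[300:400, 300:400] = 1.0
kept = split_tiles(img, tile=256)
print([pos for pos, _ in kept])  # → [(256, 256)]
```

    Only the tile containing tissue survives, which is exactly why such filtering matters before handing tiles to analysis software like Metamorph or Matlab.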

  14. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  15. Image contrast of diffraction-limited telescopes for circular incoherent sources of uniform radiance

    NASA Technical Reports Server (NTRS)

    Shackleford, W. L.

    1980-01-01

    A simple approximate formula is derived for the background intensity beyond the edge of the image of a uniform incoherent circular light source, relative to the irradiance near the center of the image. The analysis applies to diffraction-limited telescopes with or without central beam obscuration due to a secondary mirror. Scattering off optical surfaces is neglected. The analysis is expected to be most applicable to spaceborne IR telescopes, for which diffraction can be the major source of off-axis response.
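
    The setup can be checked numerically: the image of a uniform circular source through a diffraction-limited aperture is the source disk convolved with the Airy pattern, and the quantity of interest is the residual background just beyond the geometric edge. The sketch below is an illustrative computation for an unobscured aperture, not the paper's approximate formula:

```python
import numpy as np
from scipy.special import j1

N = 512
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

# Airy PSF, scaled so the first dark ring sits at about 4 pixels.
k = 3.8317 / 4.0
u = np.where(r == 0, 1e-9, k * r)
psf = (2 * j1(u) / u) ** 2
psf /= psf.sum()

disk = (r <= 60).astype(float)   # uniform circular source, radius 60 px
# FFT convolution of the source disk with the diffraction PSF.
image = np.real(np.fft.ifft2(np.fft.fft2(disk) *
                             np.fft.fft2(np.fft.ifftshift(psf))))

center = image[N // 2, N // 2]
edge_plus = image[N // 2, N // 2 + 80]   # 20 px beyond the geometric edge
print(0 < edge_plus / center < 0.05)     # small but nonzero background
```

    The diffraction halo beyond the edge is orders of magnitude below the central irradiance yet far from zero, which is precisely the off-axis response that matters for spaceborne IR telescopes.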

  16. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large format, high-resolution imagers comes online (ODI, DECam, LSST, etc.) we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be infeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as to overlay data from publicly available sources (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
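
    The dynamically scaling level-of-detail idea can be sketched as multi-zoom tile binning: coarse zoom levels store only per-tile counts, so the browser never renders every individual source. This is a hypothetical Python sketch of the concept; the actual stack described above is HTML5+JavaScript with server-side map generation:

```python
import numpy as np

def build_lod(positions, depth=3):
    """Bin normalized (x, y) source positions into tiles per zoom level."""
    levels = {}
    for z in range(depth):
        n = 2 ** z                        # n x n tiles at zoom level z
        tiles = {}
        for x, y in positions:
            key = (int(x * n), int(y * n))
            tiles[key] = tiles.get(key, 0) + 1
        levels[z] = tiles
    return levels

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(100000, 2))  # 100k sources in a unit image
lod = build_lod(pos, depth=3)
print(lod[0][(0, 0)], len(lod[2]))         # → 100000 16
```

    A client zoomed out requests level 0 (one aggregate count); zooming in fetches progressively finer tiles until individual sources are drawn.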

  17. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind Source Separation (BSS) methods aim at decomposing a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study; the analysis of skin images for the extraction of melanin and hemoglobin is one example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also yield the spectral absorbances of the main tissue components. Those spectral signatures were compared against the theoretical ones using correlation coefficients, which report values close to 0.9, a good indicator of the methods' performance. The correlation coefficients also allow the concentration maps to be identified according to the evaluated chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
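
    The Non-negative Matrix Factorization half of this analysis can be sketched with the classic Lee-Seung multiplicative updates. The two "chromophore" spectra below are made-up shapes standing in for real absorbance curves, and the sketch is generic NMF, not the paper's processing chain:

```python
import numpy as np

def nmf(V, k, n_iter=1000, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W @ H (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, (m, k))
    H = rng.uniform(0.1, 1.0, (k, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic multispectral data: each pixel is a nonnegative mixture of
# two invented spectra sampled at 8 bands.
bands = np.linspace(0, 1, 8)
spec = np.vstack([np.exp(-5 * bands), bands ** 2 + 0.1])   # 2 x 8 spectra
conc = np.random.default_rng(1).uniform(0, 1, (200, 2))    # 200 pixel mixtures
V = conc @ spec
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err < 0.1)   # factorization recovers the rank-2 structure
```

    Here W plays the role of the concentration maps and H the recovered spectral signatures, which would then be matched to theoretical chromophore spectra via correlation.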

  18. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background: Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results: We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a JPEG image of the desired quality. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions: Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499

  19. CognitionMaster: an object-based image analysis framework

    PubMed Central

    2013-01-01

    Background: Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and when user-interactivity on the object level is desired. Results: In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach enables even users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and for two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions: We introduce an open source software system that offers object-based image analysis. The object-based concept allows for a straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542

  20. Putting tools in the toolbox: Development of a free, open-source toolbox for quantitative image analysis of porous media.

    NASA Astrophysics Data System (ADS)

    Iltis, G.; Caswell, T. A.; Dill, E.; Wilkins, S.; Lee, W. K.

    2014-12-01

    X-ray tomographic imaging of porous media has proven to be a valuable tool for investigating and characterizing the physical structure and state of both natural and synthetic porous materials, including glass bead packs, ceramics, soil and rock. Given that most synchrotron facilities have user programs which grant academic researchers access to facilities and x-ray imaging equipment free of charge, a key limitation for small research groups interested in conducting x-ray imaging experiments is the financial cost associated with post-experiment data analysis. While the cost of high performance computing hardware continues to decrease, expenses associated with licensing commercial software packages for quantitative image analysis continue to increase, with current prices as high as $24,000 USD for a single-user license. As construction of the nation's newest synchrotron accelerator nears completion, a significant effort is being made here at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory (BNL), to provide an open-source, experiment-to-publication toolbox that reduces the financial and technical 'activation energy' required for performing sophisticated quantitative analysis of multidimensional porous media data sets collected using cutting-edge x-ray imaging techniques. Implementation focuses on leveraging existing open-source projects and developing additional tools for quantitative analysis. We will present an overview of the software suite in development at BNL, including major design decisions, a demonstration of several test cases illustrating currently available quantitative tools for analysis and characterization of multidimensional porous media image data sets, and plans for their future development.
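
    Two of the simplest quantities such a toolbox computes from a segmented (binary) tomographic volume — porosity and pore-cluster statistics — take only a few lines with NumPy/SciPy. This is an illustrative sketch, not the BNL suite's actual code:

```python
import numpy as np
from scipy import ndimage

# Synthetic segmented volume: True = pore voxel, drawn at ~20% porosity.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64)) < 0.2

porosity = volume.mean()                   # void fraction of the volume
labels, n_clusters = ndimage.label(volume) # 6-connected pore clusters
sizes = np.bincount(labels.ravel())[1:]    # voxels per cluster (skip background)

print(round(float(porosity), 2), n_clusters, int(sizes.max()))
```

    Real analyses add steps such as filtering, segmentation of greyscale reconstructions, and pore-network extraction, but they bottom out in exactly this kind of labeled-array bookkeeping.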

  1. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    Available fusion methods cannot self-adaptively adjust their fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images. To address this issue, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes this function with GSDA to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    • The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
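    The core idea above — scoring a fused image by a weighted sum of evaluation indices and searching the fusion parameters with a genetic loop — can be sketched as follows. This is an illustrative toy, not the authors' GSDA: the two indices (entropy and mean gradient), their equal weighting, and the single fusion weight `w` are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy source images to fuse (stand-ins for multi-source RS bands).
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))

def entropy(img, bins=32):
    """Shannon entropy of the intensity histogram (one evaluation index)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mean_gradient(img):
    """Average gradient magnitude (a second evaluation index)."""
    gy, gx = np.gradient(img)
    return np.mean(np.hypot(gx, gy))

def objective(w):
    fused = w * img_a + (1.0 - w) * img_b   # weighted pixel-level fusion
    # Weighted sum of evaluation indices (weights chosen arbitrarily here).
    return 0.5 * entropy(fused) + 0.5 * mean_gradient(fused)

# Minimal genetic search over the fusion weight w in [0, 1]:
# selection of the fittest half, averaging crossover, Gaussian mutation.
pop = rng.random(20)
for _ in range(30):
    scores = np.array([objective(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # selection
    children = (parents + rng.permutation(parents)) / 2.0    # crossover
    children += rng.normal(0.0, 0.05, size=children.shape)   # mutation
    pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)

best = float(pop[np.argmax([objective(w) for w in pop])])
print(round(best, 3))
```

    In the real algorithm the chromosome would encode the full set of fusion rules rather than one scalar weight, but the select/crossover/mutate loop against an index-based objective is the same.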

  2. Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367

    NASA Technical Reports Server (NTRS)

    Grebenev, S. A.; Forman, W.; Jones, C.; Murray, S.

    1995-01-01

    We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the sources' extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. Accounting for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. Therefore, we assumed that each source could be described as an isotropic two-dimensional Gaussian and used the wavelet amplitudes, at different scales, to determine the equivalent Gaussian full width at half maximum (FWHM) (and its uncertainty) appropriate for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 arcmin), 16 of which are apparently associated with cluster galaxies and two with serendipitous background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image. We detect 28 sources in the HRI image, of which nine are extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al.
    Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities in the range (3-30) x 10^40 erg/s and gas masses of approximately (1-30) x 10^9 solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain. The luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.
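    The scale-matching principle used here — the wavelet scale that maximizes a source's response tracks its angular extent — can be illustrated with a Mexican-hat wavelet approximated as a difference of Gaussians. A toy sketch on a synthetic image, not the authors' detection pipeline; the source width, noise level, and scale grid are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic "sky": Poisson background plus one extended Gaussian source
# centred at (64, 64) with sigma = 4 pixels.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:128, 0:128]
true_sigma = 4.0
image = rng.poisson(5.0, (128, 128)).astype(float)
image += 50.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * true_sigma**2))

def wavelet_amplitude(img, scale):
    # Mexican-hat-like wavelet implemented as a difference of Gaussians;
    # a flat background cancels, leaving compact structure at this scale.
    return gaussian_filter(img, scale) - gaussian_filter(img, 2 * scale)

scales = [1, 2, 4, 8, 16]
amps = [wavelet_amplitude(image, s)[64, 64] for s in scales]
best_scale = scales[int(np.argmax(amps))]
# The maximising scale estimates the source extent; an equivalent
# Gaussian FWHM would follow as ~2.355 * sigma.
print(best_scale)
```

    For a source with sigma = 4, the response peaks at analysing scales comparable to the source width, which is how the paper converts wavelet amplitudes into an equivalent Gaussian FWHM per source.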

  3. Ghost imaging with bucket detection and point detection

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection, using three types of illuminating source: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to whether a bucket or point detector is used; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.

  4. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  5. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. Contact: wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  6. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  7. X-Ray Processing of ChaMPlane Fields: Methods and Initial Results for Selected Anti-Galactic Center Fields

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; van den Berg, Maureen; Schlegel, Eric M.; Grindlay, Jonathan E.; Koenig, Xavier; Laycock, Silas; Zhao, Ping

    2005-12-01

    We describe the X-ray analysis procedure of the ongoing Chandra Multiwavelength Plane (ChaMPlane) Survey and report the initial results from the analysis of 15 selected anti-Galactic center observations (90deg

  8. Thermal image analysis using the serpentine method

    NASA Astrophysics Data System (ADS)

    Koprowski, Robert; Wilczyński, Sławomir

    2018-03-01

    Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known and new methods of image analysis and processing proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic, and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained with the watershed method and with a hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis make it possible to determine the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method thus provides new quantitative information from thermal imaging. Since it also allows for image segmentation and for locating the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
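    One plausible reading of the serpentine sampling step — unrolling the 2-D thermal map into a continuous 1-D profile and inspecting its low Fourier harmonics — can be sketched as below. This is an interpretation for illustration only, not the authors' published algorithm; the synthetic thermal map and the choice of harmonics are assumptions.

```python
import numpy as np

# Synthetic thermal map: two "heat sources" on a slightly noisy background.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
thermal = (np.exp(-((x - 20) ** 2 + (y - 32) ** 2) / 50.0)
           + np.exp(-((x - 44) ** 2 + (y - 32) ** 2) / 50.0)
           + rng.normal(0.0, 0.01, (64, 64)))

# Boustrophedon ("serpentine") unroll: reverse every other row so the
# scan path through the image is spatially continuous.
serpent = thermal.copy()
serpent[1::2] = serpent[1::2, ::-1]
profile = serpent.ravel()

# Fourier analysis of the mean-subtracted profile; the first and second
# harmonics characterise a global gradient vs. multiple distinct sources.
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
first, second = float(spectrum[1]), float(spectrum[2])
print(round(first, 3), round(second, 3))
```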

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  10. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
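    The thin-plate-spline warp that LittleQuickWarp is benchmarked against can be written in a few lines with SciPy's RBF interpolator. This is a generic TPS sketch with made-up landmarks, not the LittleQuickWarp or Vaa3D implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Landmark-driven warp: map control points from a moving image into a
# reference space via a thin-plate-spline displacement field.
src = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], dtype=float)
dst = src + np.array([[1, 0], [0, 1], [0, 0], [1, 1], [0.5, 0.5]])

# smoothing=0 (the default) gives an interpolating TPS: the landmarks
# themselves map exactly onto their targets.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Any other coordinate is warped smoothly between the landmarks.
query = np.array([[2.5, 2.5]])
print(tps(query))
```

    For a full image, one would evaluate `tps` on the whole pixel grid and resample; the memory and time cost of that dense evaluation is exactly what LittleQuickWarp is designed to reduce.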

  11. Analysis of spectrally resolved autofluorescence images by support vector machines

    NASA Astrophysics Data System (ADS)

    Mateasik, A.; Chorvat, D.; Chorvatova, A.

    2013-02-01

    Spectral analysis of autofluorescence images of isolated cardiac cells was performed to evaluate and classify the metabolic state of the cells with respect to their responses to metabolic modulators. The classification was done using a machine learning approach based on a support vector machine, with a set of features automatically calculated from the recorded spectral profiles of the autofluorescence images. This classification method was compared with the classical approach, in which the individual spectral components contributing to cell autofluorescence are estimated by spectral analysis, namely by blind source separation using non-negative matrix factorization. The comparison of both methods showed that machine learning can effectively classify the spectrally resolved autofluorescence images without detailed knowledge of the sources of autofluorescence and their spectral properties.
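    The two approaches being compared can be miniaturized as follows: NMF blind-source separation of spectral profiles versus direct SVM classification of the same profiles. The synthetic spectra, component shapes, and labels below are invented for illustration; this is not the study's data or code.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

# Toy spectral profiles: each "pixel" is a non-negative mixture of two
# emission spectra (stand-ins for autofluorescence components).
rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 16)
comp_a = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
comp_b = np.exp(-((wavelengths - 0.7) ** 2) / 0.01)
weights = rng.random((200, 2))
spectra = weights @ np.vstack([comp_a, comp_b])
spectra += rng.random(spectra.shape) * 0.01   # keep everything non-negative

# Classical route: blind source separation recovers the two components.
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
abundances = nmf.fit_transform(spectra)

# Machine-learning route: classify each spectrum by its dominant source
# directly, with no model of the underlying components.
labels = (weights[:, 0] > weights[:, 1]).astype(int)
clf = SVC().fit(spectra, labels)
accuracy = clf.score(spectra, labels)
print(nmf.components_.shape, round(accuracy, 2))
```

    The SVM needs labelled examples but no spectral model, which mirrors the paper's point that classification can succeed without knowing the fluorophores' spectra.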

  12. Exploiting Fission Chain Reaction Dynamics to Image Fissile Materials

    NASA Astrophysics Data System (ADS)

    Chapman, Peter Henry

    Radiation imaging is one potential method to verify nuclear weapons dismantlement. The neutron coded aperture imager (NCAI), jointly developed by Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL), is capable of imaging sources of fast (e.g., fission spectrum) neutrons using an array of organic scintillators. This work presents a method developed to discriminate between non-multiplying (i.e., non-fissile) neutron sources and multiplying (i.e., fissile) neutron sources using the NCAI. This method exploits the dynamics of fission chain reactions; it applies time-correlated pulse-height (TCPH) analysis to identify neutrons in fission chain reactions. TCPH analyzes the neutron energy deposited in the organic scintillator vs. the apparent neutron time-of-flight. Energy deposition is estimated from light output, and time-of-flight is estimated from the time between the neutron interaction and the immediately preceding gamma interaction. Neutrons that deposit more energy than can be accounted for by their apparent time-of-flight are identified as fission chain-reaction neutrons, and the image is reconstructed using only these neutron detection events. This analysis was applied to measurements of weapons-grade plutonium (WGPu) metal and 252Cf performed at the Nevada National Security Site (NNSS) Device Assembly Facility (DAF) in July 2015. The results demonstrate it is possible to eliminate the non-fissile 252Cf source from the image while preserving the fissile WGPu source. TCPH analysis was also applied to additional scenes in which the WGPu and 252Cf sources were measured individually. The results of these separate measurements further demonstrate the ability to remove the non-fissile 252Cf source and retain the fissile WGPu source.
    Simulations performed using MCNPX-PoliMi indicate that in a one-hour measurement, solid spheres of WGPu are retained at a 1σ level for neutron multiplications M ≈ 3.0 and above, while hollow WGPu spheres are retained for M ≈ 2.7 and above.
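    The kinematic screen at the heart of TCPH can be sketched in a few lines: convert each neutron's apparent time-of-flight into the maximum kinetic energy a neutron covering that distance could carry, and flag events that deposit more than that cap. The source-to-detector distance and the event list below are assumptions for illustration, not NCAI parameters.

```python
import numpy as np

# A neutron whose deposited energy exceeds the kinematic maximum implied
# by its apparent time-of-flight cannot have come directly from the
# triggering fission — it belongs to a later fission in a chain.
M_N_C2 = 939.565   # neutron rest-mass energy, MeV
C_CM_NS = 29.9792  # speed of light, cm/ns
DISTANCE = 100.0   # assumed source-to-detector distance, cm

def max_deposited_energy(tof_ns):
    """Kinetic energy of a neutron covering DISTANCE in tof_ns
    (non-relativistic approximation, adequate for fission neutrons)."""
    beta = DISTANCE / (tof_ns * C_CM_NS)
    return 0.5 * M_N_C2 * beta**2

# Hypothetical events: (apparent time-of-flight in ns, deposited MeV).
events = np.array([[50.0, 0.5], [50.0, 5.0], [20.0, 2.0]])
tof, e_dep = events[:, 0], events[:, 1]

# Flag chain-reaction neutrons; only these feed the image reconstruction.
chain_neutrons = e_dep > max_deposited_energy(tof)
print(chain_neutrons)
```

    Here a 50 ns flight over 100 cm caps the deposited energy near 2 MeV, so the 5 MeV event is flagged as a chain-reaction neutron while the others are not.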

  13. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
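    The headline comparison metric — root-mean-square systematic error of measured versus true CT numbers over all inserts — is straightforward to reproduce; the HU values below are invented for illustration, not the study's measurements.

```python
import math

# One measured and one nominal (true) CT number per tissue insert, in HU.
measured = [10.0, -45.0, 120.0, 300.0]
true =     [0.0,  -50.0, 110.0, 290.0]

# Systematic error per insert, then its root mean square across inserts.
errors = [m - t for m, t in zip(measured, true)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(round(rmse, 1))
```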

  14. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments

    PubMed Central

    Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina

    2016-01-01

    Background Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto-inaccessible large-scale MM image stacks. Conclusion Presented is the software molyso, a ready-to-use open source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996

  15. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand of more powerful tools for remote sensing and management of earth resources steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Amanda M.; Daly, Don S.; Willse, Alan R.

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  17. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2010-07-01

    The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public Advanced CCD Imaging Spectrometer imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲30″. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected.
In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.
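    The catalog's stated inclusion rule — flux at least three times its 1σ uncertainty in at least one energy band — reduces to a simple filter. The toy fluxes and band keys below are illustrative, not actual CSC columns.

```python
# Each source carries (flux, 1-sigma uncertainty) per energy band;
# the band names "b"/"s" and the numbers are invented for this sketch.
sources = [
    {"name": "srcA",
     "flux": {"b": (1.2e-14, 3.0e-15), "s": (4.0e-15, 2.0e-15)}},
    {"name": "srcB",
     "flux": {"b": (5.0e-15, 2.5e-15), "s": (3.0e-15, 1.6e-15)}},
]

def significant(src, threshold=3.0):
    """Keep a source if flux >= threshold * error in at least one band."""
    return any(f >= threshold * err for f, err in src["flux"].values())

kept = [s["name"] for s in sources if significant(s)]
print(kept)
```

    Here srcA clears the 3σ bar in its broad band while srcB clears it in neither band, so only srcA would enter the catalog.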

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wehrschuetz, M., E-mail: martin.wehrschuetz@klinikum-graz.at; Aschauer, M.; Portugaller, H.

    The purpose of this study was to assess interobserver variability and accuracy in the evaluation of renal artery stenosis (RAS) with gadolinium-enhanced MR angiography (MRA) and digital subtraction angiography (DSA) in patients with hypertension. The authors found that source images are more accurate than maximum intensity projection (MIP) for depicting renal artery stenosis. Two independent radiologists reviewed MRA and DSA from 38 patients with hypertension. Studies were postprocessed to display images in MIP and source images. DSA was the standard for comparison in each patient. For each main renal artery, percentage stenosis was estimated for any stenosis detected by the two radiologists. To calculate sensitivity, specificity and accuracy, MRA studies and stenoses were categorized as normal, mild (1-39%), moderate (40-69%), severe (≥70%), or occluded. DSA stenosis estimates of 70% or greater were considered hemodynamically significant. Analysis of variance demonstrated that MIP estimates of stenosis were greater than source image estimates for both readers. Differences in estimates for MIP versus DSA reached significance in one reader. The interobserver variance for MIP, source images and DSA was excellent (0.80 < κ ≤ 0.90). The specificity of source images was high (97%) but lower for MIP (87%); average accuracy was 92% for MIP and 98% for source images. In this study, source images were significantly more accurate than MIP images for one reader, with a similar trend observed in the second reader. The interobserver variability was excellent. When renal artery stenosis is a consideration, high accuracy can only be obtained when source images are examined.
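    The reported sensitivity, specificity, and accuracy follow from a 2×2 comparison of each reader's grades against the DSA standard at the ≥70% threshold. A sketch on invented stenosis grades (not the study's data):

```python
# Percent stenosis per artery: DSA standard vs. one reader's MRA estimate.
dsa = [80, 30, 0, 75, 50, 10, 90, 0]
mra = [85, 20, 0, 60, 55, 15, 95, 5]

# "Positive" = hemodynamically significant stenosis (>= 70%).
truth = [s >= 70 for s in dsa]
pred = [s >= 70 for s in mra]

tp = sum(t and p for t, p in zip(truth, pred))
tn = sum((not t) and (not p) for t, p in zip(truth, pred))
fp = sum((not t) and p for t, p in zip(truth, pred))
fn = sum(t and (not p) for t, p in zip(truth, pred))

sensitivity = tp / (tp + fn)   # significant stenoses correctly graded
specificity = tn / (tn + fp)   # non-significant arteries correctly graded
accuracy = (tp + tn) / len(dsa)
print(sensitivity, specificity, accuracy)
```

    The single missed 75% stenosis (graded 60% on MRA) is what drags sensitivity below specificity in this toy, mirroring how MIP's stenosis overestimates and underestimates shift the confusion table.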

  19. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  20. Report for 2011 from the Bordeaux IVS Analysis Center

    NASA Technical Reports Server (NTRS)

    Charlot, Patrick; Bellanger, Antoine; Bourda, Geraldine; Collioud, Arnaud; Baudry, Alain

    2012-01-01

    This report summarizes the activities of the Bordeaux IVS Analysis Center during the year 2011. The work focused on (i) regular analysis of the IVS-R1 and IVS-R4 sessions with the GINS software package; (ii) systematic VLBI imaging of the RDV sessions and calculation of the corresponding source structure index and compactness values; (iii) imaging of the sources observed during the 2009 International Year of Astronomy IVS observing session; and (iv) continuation of our VLBI observational program to identify optically-bright radio sources suitable for the link with the future Gaia frame. Also of importance is the enhancement of the IVS LiveWeb site which now comprises all IVS sessions back to 2003, allowing one to search past observations for session-specific information (e.g. sources or stations).

  1. Tamil Chola Bronzes and Swamimalai Legacy: Metal Sources and Archaeotechnology

    NASA Astrophysics Data System (ADS)

    Srinivasan, Sharada

    2016-08-01

    This review explores the great copper alloy image casting traditions of southern India from archaeometallurgical and ethnometallurgical perspectives. The usefulness of lead isotope ratio and compositional analysis in the finger-printing and art historical study of more than 130 early historic, Pallava, Chola, later Chola, and Vijayanagara sculptures (fifth-eighteenth centuries) is highlighted, including Nataraja, Buddha, Parvati, and Rama images made of copper, leaded bronze, brass, and gilt copper. Image casting traditions at Swamimalai in Tamil Nadu are compared with artistic treatises and with the technical examination of medieval bronzes, throwing light on continuities and changes in foundry practices. Western Indian sources could be pinpointed for a couple of medieval images from lead isotope analysis. Slag and archaeometallurgical investigations suggest the exploitation of some copper and lead-silver sources in the Andhra and Karnataka regions in the early historic Satavahana period and point to probable copper sources for the medieval images in Karnataka, Tamil Nadu, and Andhra Pradesh. The generally lower iron content of southern Indian bronzes perhaps renders the proximal copper-magnetite reserves of Seruvila in Sri Lanka a less likely source. Given the lack of lead deposits in Sri Lanka, however, the match of the lead isotope signatures of a well-known Ceylonese Buddhist Tara in the British Museum with a Buddha image from Nagapattinam in Tamil Nadu may underscore ties between the island nation and the southern Indian Tamil regions.

  2. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions.

    PubMed

    Shenoy, Shailesh M

    2016-07-01

A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between open source and commercial software for quantitative image acquisition, analysis and visualization. In addition to the expense of software licensing, one must consider factors such as the quality and usefulness of the software's support, training and documentation. One must also consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute one's methods to the community using the software, and the potential for achieving automation to improve productivity.

  3. Environmental Characterization for Target Acquisition. Report 2. Analysis of Thermal and Visible Imagery

    DTIC Science & Technology

    1993-11-01

[The report excerpt is OCR-garbled; the recoverable fragments are table-of-contents entries for "Image Metrics" and "Analysis Procedures" and appendix pages of image-processing software source code.]

  4. Measurement of Vibrated Bulk Density of Coke Particle Blends Using Image Texture Analysis

    NASA Astrophysics Data System (ADS)

    Azari, Kamran; Bogoya-Forero, Wilinthon; Duchesne, Carl; Tessier, Jayson

    2017-09-01

    A rapid and nondestructive machine vision sensor was developed for predicting the vibrated bulk density (VBD) of petroleum coke particles based on image texture analysis. It could be used for making corrective adjustments to a paste plant operation to reduce green anode variability (e.g., changes in binder demand). Wavelet texture analysis (WTA) and gray level co-occurrence matrix (GLCM) algorithms were used jointly for extracting the surface textural features of coke aggregates from images. These were correlated with the VBD using partial least-squares (PLS) regression. Coke samples of several sizes and from different sources were used to test the sensor. Variations in the coke surface texture introduced by coke size and source allowed for making good predictions of the VBD of individual coke samples and mixtures of them (blends involving two sources and different sizes). Promising results were also obtained for coke blends collected from an industrial-baked carbon anode manufacturer.
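The texture-to-VBD pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it computes one GLCM texture feature (contrast) in pure NumPy and fits an ordinary least-squares line as a simplified stand-in for their PLS regression; the function names and the calibration numbers are invented for the example.

```python
import numpy as np

def glcm(img, levels=8):
    """Normalized gray-level co-occurrence matrix for a (0, 1) pixel offset.
    img is a 2-D array with values in [0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (left, right), 1)        # count co-occurring level pairs
    return m / m.sum()

def contrast(m):
    """GLCM contrast: large when neighboring pixels differ strongly."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))

# Toy calibration of a texture feature against known VBD values
# (numbers are illustrative only); OLS stands in for PLS regression.
features = np.array([[0.0], [12.0], [49.0]])   # contrast per coke sample
vbd = np.array([0.95, 0.88, 0.78])             # measured VBD, g/cm^3
w, *_ = np.linalg.lstsq(np.c_[features, np.ones(3)], vbd, rcond=None)
predict = lambda c: w[0] * c + w[1]            # VBD predicted from texture
```

In the paper's setting, many wavelet and GLCM features would be stacked into the feature matrix and PLS would handle their collinearity; the single-feature fit here only shows the calibration idea.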

  5. The analysis of complex mixed-radiation fields using near real-time imaging.

    PubMed

    Beaumont, Jonathan; Mellor, Matthew P; Joyce, Malcolm J

    2014-10-01

A new mixed-field imaging system has been constructed at Lancaster University using the principles of collimation and back projection to passively locate and assess sources of neutron and gamma-ray radiation. The system was set up at the University of Manchester, where three radiation sources, (252)Cf, a lead-shielded (241)Am/Be and (22)Na, were imaged. Real-time discrimination was used to find the respective components of the neutron and gamma-ray fields detected by a single EJ-301 liquid scintillator, allowing separate images of neutron and gamma-ray emitters to be formed. (252)Cf and (22)Na were successfully observed and located in the gamma-ray image; however, the (241)Am/Be was not, owing to its surrounding lead shielding. The (252)Cf and (241)Am/Be neutron sources were seen clearly in the neutron image, demonstrating the advantage of this mixed-field technique over a gamma-ray-only image, in which the (241)Am/Be source would have gone undetected.

  6. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. 
MIRA offers an infrastructure that can serve as a cross-boundary collaborative molecular imaging research platform, supporting rapid advances in cancer diagnosis and therapeutics.

  7. Digital Dental X-ray Database for Caries Screening

    NASA Astrophysics Data System (ADS)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

A standard database is an essential requirement for comparing the performance of image analysis techniques. A major obstacle in dental image analysis is the lack of an available image database, which this paper provides. Periapical dental X-ray images, suitable for analysis and approved by many dental experts, were collected. This type of dental radiograph is common and inexpensive, and is normally used for diagnosing dental disease and detecting abnormalities. The database contains 120 periapical X-ray images covering upper and lower jaws. The digital dental database was constructed to give researchers a common source for applying and comparing image analysis techniques and improving the performance of each technique.

  8. Enhanced Analysis Techniques for an Imaging Neutron and Gamma Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Madden, Amanda C.

The presence of gamma rays and neutrons is a strong indicator of the presence of Special Nuclear Material (SNM). The imaging Neutron and gamma ray SPECTrometer (NSPECT), developed by the University of New Hampshire and Michigan Aerospace Corporation, detects the fast neutrons and prompt gamma rays from fissile material and the gamma rays from radioactive material. The instrument operates as a double-scatter device, requiring a neutron or a gamma ray to interact twice in the instrument. While this detection requirement decreases the efficiency of the instrument, it offers superior background rejection and the ability to measure the energy and momentum of the incident particle. These measurements yield energy spectra and images of the emitting source for source identification and localization. The dual-species instrument provides better detection than either species alone. In realistic detection scenarios, few particles are detected from a potential threat because of source shielding, detection at a distance, high background, and weak sources. This results in a small signal-to-noise ratio, and threat detection becomes difficult. To address these difficulties, several enhanced data analysis tools were developed. A Receiver Operating Characteristic (ROC) curve helps set instrument alarm thresholds and identify the presence of a source; analysis of a dual-species ROC curve provides superior detection capabilities. Bayesian analysis helps detect and identify a source through model comparisons and helps create background-corrected count spectra for enhanced spectroscopy. Development of an instrument response using simulations and numerical analyses will help perform spectral and image deconvolution.
This thesis outlines the principles of operation of the NSPECT instrument using the double-scatter technique, presents traditional and enhanced analysis techniques as applied to data from the instrument, and shows how these techniques can improve detection of radioactive and fissile materials.

  9. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image, and discusses new methods whose Matlab source code can be used in practice without any licensing restrictions. A sample application and example results of hyperspectral image analysis are also presented.

  10. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  11. Pilot Study of an Open-source Image Analysis Software for Automated Screening of Conventional Cervical Smears.

    PubMed

    Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal

    2018-01-01

The Pap-stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid-based cervical smears; however, no image analysis software exists for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for the analysis of conventional smears. The software was developed using the Python programming language and open source libraries, and was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears reported as Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images in which some abnormality had been reported, were collected from the archives of the hospital, and the software was tested on them. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. It flagged 68.88% of abnormal images, as well as 19.23% of NILM images. The major difficulties were segmentation of overlapping cell clusters and separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement of the technique is required.
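Two of the screening criteria named above, the nuclear:cytoplasmic ratio and the coefficient of variation in nuclear size, are straightforward to compute once segmentation masks exist. The sketch below is illustrative only; the function names and the cutoff values are hypothetical, not taken from the study's software.

```python
import numpy as np

def screening_features(nuc_mask, cyt_mask, nuclear_areas):
    """Compute two per-image screening features from binary segmentation
    masks and a list of per-nucleus pixel areas (illustrative)."""
    nc_ratio = nuc_mask.sum() / max(cyt_mask.sum(), 1)   # nuclear:cytoplasmic area
    areas = np.asarray(nuclear_areas, dtype=float)
    cv = areas.std() / areas.mean()                      # variation in nuclear size
    return nc_ratio, cv

def flag_image(nc_ratio, cv, nc_cut=0.5, cv_cut=0.3):
    # Hypothetical cutoffs; real thresholds would be calibrated on
    # reference images such as the Bethesda set mentioned above.
    return nc_ratio > nc_cut or cv > cv_cut
```

The hard part the abstract points to, separating overlapping nuclei so that `nuclear_areas` is trustworthy, happens before this step and is not shown.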

  12. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of a scene. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is the use of SGWT for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain; this not only effectively preserves the details of the different source images but also represents their irregular areas well. On the other hand, a novel weighted-average method based on the bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
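Transform-domain fusion schemes like the one above share a common skeleton: decompose both sources, merge the coefficient subbands with a fusion rule, then invert the transform. The sketch below shows only the two generic merge rules, not the paper's SGWT decomposition or its bilateral-filter weighting; the function names are mine.

```python
import numpy as np

def fuse_high(c1, c2):
    """High-frequency subbands: keep the larger-magnitude coefficient,
    preserving the sharpest detail from either source image."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_low(a1, a2, s1, s2):
    """Low-frequency subbands: per-pixel weighted average, with weights
    derived from saliency maps s1, s2 (e.g., local energy)."""
    w = s1 / (s1 + s2 + 1e-12)
    return w * a1 + (1.0 - w) * a2
```

The paper's contribution is precisely in replacing the naive weights here with bilateral-filter-based ones that respect the spatial consistency of natural images.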

  13. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. 
Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

  14. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

SPM is free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis, for which there are several variants dealing with evoked responses, steady-state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and to build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  16. An image analysis toolbox for high-throughput C. elegans assays

    PubMed Central

    Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H.; Riklin-Raviv, Tammy; Conery, Annie L.; O’Rourke, Eyleen J.; Sokolnicki, Katherine L.; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E.; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M.; Carpenter, Anne E.

    2012-01-01

    We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease. PMID:22522656

  17. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source, which induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed, whereas thermal noise produces deviations that move at random speeds. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders the apparatus suitable for portable, on-site analysis.

  18. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  19. SU-G-201-16: Thermal Imaging in Source Visualization and Radioactivity Measurement for High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X; Lei, Y; Zheng, D

    2016-06-15

Purpose: High Dose Rate (HDR) brachytherapy poses a special challenge to radiation safety and quality assurance (QA) due to its high radioactivity, so it is critical to verify the HDR source location and its radioactive strength. This study demonstrates a new method for measuring HDR source location and radioactivity utilizing thermal imaging; a potential application is HDR QA and safety improvement. Methods: Heating effects of an HDR source were studied using finite element analysis (FEA). Thermal cameras were used to visualize an HDR source inside a plastic applicator made of polyvinylidene difluoride (PVDF). Using different source dwell times, correlations between the HDR source strength and heating effects were studied, thus establishing potential daily QA criteria using thermal imaging. Results: For an Ir-192 source with a radioactivity of 10 Ci, the decay-induced heating power inside the source is ~13.3 mW. After the HDR source was extended into the PVDF applicator and reached thermal equilibrium, thermal imaging visualized a temperature gradient of 10 K/cm along the PVDF applicator surface, which agreed with FEA modeling. For Ir-192 source activities ranging from 4.20 to 10.20 Ci, thermal imaging could verify source activity with an accuracy of 6.3% at a dwell time of 10 sec, and 2.5% at 100 sec. Conclusion: Thermal imaging is a feasible tool to visualize HDR source dwell positions and verify source integrity. Patient safety and treatment quality will be improved by integrating thermal measurements into HDR QA procedures.
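The quoted figures (10 Ci producing ~13.3 mW of decay heat) can be sanity-checked with a back-of-envelope calculation; the interpretation in the final comment is my own inference, not a claim from the abstract.

```python
# Consistency check of the quoted numbers: a 10 Ci source producing
# 13.3 mW implies a certain energy deposited in the source per decay.
CI_TO_BQ = 3.7e10        # decays per second per curie
J_PER_MEV = 1.602e-13    # joules per MeV

activity_bq = 10.0 * CI_TO_BQ    # 10 Ci source
power_w = 13.3e-3                # quoted self-heating power, W
e_mev = power_w / activity_bq / J_PER_MEV   # MeV deposited per decay
# e_mev comes out near 0.22 MeV, plausibly the beta/conversion-electron
# energy self-absorbed in an Ir-192 capsule (most gamma energy escapes).
```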

  20. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment.

  1. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The task we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  2. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
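A key ingredient of any such pipeline is a robust local background estimate that bright sources cannot bias. CAESAR includes several estimators; the sketch below shows one standard choice, iterative sigma clipping followed by an n-sigma detection threshold, in pure NumPy. The function names are mine, and this is a generic illustration rather than CAESAR's actual code.

```python
import numpy as np

def clipped_background(img, nsigma=3.0, iters=5):
    """Robust background mean/rms via iterative sigma clipping:
    repeatedly discard pixels far from the mean so that sources
    do not inflate the noise estimate."""
    data = img.astype(float).ravel()
    for _ in range(iters):
        mu, sigma = data.mean(), data.std()
        if sigma == 0.0:
            break
        data = data[np.abs(data - mu) < nsigma * sigma]
    return data.mean(), data.std()

def significant_pixels(img, nsigma=5.0):
    """Boolean mask of pixels above background + n*rms, the usual seed
    for a subsequent segmentation or superpixel-clustering stage."""
    mu, sigma = clipped_background(img)
    return img > mu + nsigma * sigma
```

In a real map the clipping would be done per local tile rather than globally, so that the Galactic-plane diffuse emission discussed above does not enter a single background value.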

  3. An Open Source Agenda for Research Linking Text and Image Content Features.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  4. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, describe some of the key attributes that make an open source imaging application successful, and point to opportunities for further interoperability that should greatly accelerate future cell biology discovery.

  5. Design of a Borescope for Extravehicular Non-Destructive Applications

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic

    2003-01-01

Anomalies such as corrosion, structural damage, misalignment, cracking, stress fractures, pitting, or wear can be detected and monitored with the aid of a borescope. A borescope requires a source of light for proper operation. Today's lighting technology market consists of incandescent lamps, fluorescent lamps, and other types of electric arc and electric discharge vapor lamps. Recent advances in LED technology have made LEDs viable for a number of applications, including vehicle stoplights, traffic lights, machine-vision inspection, illumination, and street signs. LEDs promise a significant reduction in power consumption compared to other light sources. This project focused on comparing images taken by the Olympus IPLEX using two different light sources: the 50-W internal metal halide lamp, and a 1-W LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection, and qualitatively compared using image registration and transformation [1]. The gray-level histogram, edge detection, image profile, and image registration did not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. Pattern-recognition analysis, such as the Eigenfaces and Gaussian pyramid techniques used in face recognition, may be more useful.

  6. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

A growing number of tools now allow live recordings of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory for interpreting quantitative imaging. To meet this need, we have developed an open source toolset for Fiji, BRET-Analyzer, allowing systematic analysis from image processing to ratio quantification. We share this open source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset provides (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method for the image used as the denominator of the ratio, to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving signals from both small and large ensembles over time. For example, we followed and quantified live scaffold-protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile, and efficient toolset for automated, reproducible, and meaningful image ratio analysis.
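Steps (1), (3), and (4) above — background subtraction, thresholding the denominator image to delimit the specimen, and pixel-by-pixel division — can be sketched as follows. This is a minimal NumPy reconstruction of the idea, not BRET-Analyzer's Fiji implementation; the mean-based automatic threshold and the synthetic data are assumptions.

```python
import numpy as np

def bret_ratio(acceptor, donor, bg_a=0.0, bg_d=0.0, thresh=None):
    """Subtract channel backgrounds, threshold the denominator image to
    delimit the specimen, then divide the channels pixel by pixel."""
    a = acceptor - bg_a
    d = donor - bg_d
    if thresh is None:
        thresh = d.mean()                 # crude automatic threshold (assumption)
    mask = d > thresh
    ratio = np.full(d.shape, np.nan)
    ratio[mask] = a[mask] / d[mask]       # ratio defined only inside the specimen
    return ratio, mask

rng = np.random.default_rng(1)
donor = np.full((32, 32), 5.0)            # 5-count background everywhere
donor[8:24, 8:24] = 100.0                 # luminescent specimen
acceptor = 0.4 * donor + rng.normal(0, 1, donor.shape)

ratio, mask = bret_ratio(acceptor, donor, bg_d=5.0)
vals = ratio[mask]
print("specimen pixels:", int(mask.sum()))
print("mean ratio and SD:", float(vals.mean()), float(vals.std()))
```

Masking the denominator before dividing is what prevents near-zero background pixels from producing meaningless, exploding ratio values at the specimen edge.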

  7. Neutron imaging data processing using the Mantid framework

    NASA Astrophysics Data System (ADS)

    Pouzols, Federico M.; Draper, Nicholas; Nagella, Sri; Yang, Erica; Sajid, Ahmed; Ross, Derek; Ritchie, Brian; Hill, John; Burca, Genoveva; Minniti, Triestino; Moreton-Smith, Christopher; Kockelmann, Winfried

    2016-09-01

    Several imaging instruments are currently being constructed at neutron sources around the world. The Mantid software project provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. At ISIS, IMAT (Imaging and Materials Science & Engineering) will offer unique time-of-flight neutron imaging techniques which impose several software requirements to control the data reduction and analysis. Here we outline the extensions currently being added to Mantid to provide specific support for neutron imaging requirements.

  8. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
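The generative model underlying this separation problem, noisy linear mixtures X = AS + N, can be illustrated numerically. The sketch below uses an oracle (known mixing matrix) and least-squares inversion merely to show the forward model; the paper's actual contribution is estimating A, the sources, and their edge maps jointly with a mean-field EM algorithm under MRF priors. The specific sources and matrix values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
# Two flattened "source images" with strong local autocorrelation,
# the property the MRF prior encodes.
s1 = np.linspace(0.0, 1.0, n * n)              # smooth ramp
s2 = np.repeat(rng.random(n), n)               # piecewise-constant blocks
S = np.vstack([s1, s2])                        # sources, shape (2, n*n)

A = np.array([[0.9, 0.4],                      # mixing matrix (unknown in the paper)
              [0.3, 0.8]])
X = A @ S + rng.normal(0.0, 0.01, (2, n * n))  # noisy linear mixtures

# Oracle separation with A known, just to illustrate the model; the paper
# estimates A, S and the edge maps jointly by mean-field EM.
S_hat = np.linalg.lstsq(A, X, rcond=None)[0]
rmse = float(np.sqrt(np.mean((S_hat - S) ** 2)))
print("oracle reconstruction RMSE:", rmse)
```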

  9. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  11. PLUS: open-source toolkit for ultrasound-guided intervention systems.

    PubMed

    Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor

    2014-10-01

A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under a BSD license and can be downloaded from http://www.plustoolkit.org.

  12. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with clinical systems. Comparison with established outcomes and evaluation tasks motivates integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed.
The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
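The SUV normalization step mentioned above follows the standard body-weight formula: tissue activity concentration divided by the injected dose distributed over the body mass. A minimal sketch with hypothetical numbers; the function name is an assumption, and decay correction of the injected dose to scan time is omitted for brevity.

```python
def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight SUV: activity concentration (Bq/mL) divided by the
    injected dose per gram of body weight (assuming tissue density ~1 g/mL)."""
    return activity_bq_per_ml / (injected_dose_bq / (body_weight_kg * 1000.0))

# Hypothetical voxel: 5 kBq/mL uptake, 370 MBq injected dose, 74 kg patient.
suv = suv_bw(5_000.0, 370e6, 74.0)
print("SUV:", suv)  # uniform distribution of the dose over the body gives SUV = 1
```

The Real World Value Mapping objects mentioned in the Results are what carry exactly this kind of scaling (stored pixel value to SUV) alongside the image data.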

  13. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca

Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance, and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources, based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot; we discuss the validity of some possible approximations and test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It is shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results in good agreement with experimental measurements.

  14. Innovations in the Analysis of Chandra-ACIS Observations

    NASA Astrophysics Data System (ADS)

    Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.

    2010-05-01

    As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.

  15. Open source software projects of the caBIG In Vivo Imaging Workspace Software special interest group.

    PubMed

    Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence

    2007-11-01

The cancer Biomedical Informatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information, and the associated data needed to define a gold standard for a given database, against which change-analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.

  16. Development of an ultralow-light-level luminescence image analysis system for dynamic measurements of transcriptional activity in living and migrating cells.

    PubMed

    Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B

    2000-04-10

We have developed an approach to study, in single living epithelial cells, both cell migration and transcriptional activation, the latter evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope connected to an ultralow-light-level photon-counting camera and an image-acquisition card, with specialized image-analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain was optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed for luminescence observation. This setup allows us to measure by image analysis the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was set to a high value, which required a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software for quantifying the kinetics of luminescence changes in migrating cells.

  17. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  18. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    PubMed

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
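The thresholding/positive-pixel-counting idea behind the "custom CD3+%" measure can be sketched as a stain-intensity threshold applied within a tissue mask. This is an illustrative reconstruction on synthetic data, not the published ImageJ workflow; the threshold value and data are assumptions.

```python
import numpy as np

def positive_pixel_fraction(stain, tissue_mask, thresh=0.3):
    """Fraction of tissue-area pixels whose stain intensity exceeds a
    threshold (the threshold value here is an illustrative assumption)."""
    positive = (stain > thresh) & tissue_mask
    return positive.sum() / tissue_mask.sum()

rng = np.random.default_rng(3)
stain = rng.random((100, 100)) * 0.2      # weak nonspecific background staining
stain[40:60, 40:60] = 0.9                 # a focus of strongly CD3+ staining
tissue = np.ones_like(stain, dtype=bool)  # whole field is tissue in this toy case

frac = positive_pixel_fraction(stain, tissue)
print("CD3+ pixel fraction:", frac)       # 400 of 10,000 pixels
```

Normalizing by the tissue-mask area rather than the whole image is what makes the measure comparable across biopsies of different sizes.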

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

Purpose: To present an open source and free platform to facilitate radiomics research: the "Radiomics toolbox" in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The "Radiomics toolbox" strives to fill the need for such a software platform. The platform supports (1) import of various image modalities, such as CT, PET, MR, SPECT, and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and data management are implemented in Matlab for ease of development and readability of the code by a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example, the Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source software under a GNU license. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC; the analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the "Computational Environment for Radiotherapy Research" to the "Computational Environment for Radiological Research".

  20. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, from data import and pre-processing, through segmentation and statistical analysis, to visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  1. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.

    2013-01-01

The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. -67°00'00'' and R.A. 00h59m17s, decl. -67°00'00'', J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  2. Cellular Consequences of Telomere Shortening in Histologically Normal Breast Tissues

    DTIC Science & Technology

    2013-09-01

using the open source, Java-based image analysis software package ImageJ (http://rsb.info.nih.gov/ij/) and a custom designed plugin ("Telometer") ... Tabulated data were stored in a MySQL (http://www.mysql.com) database and viewed through Microsoft Access (Microsoft Corp.). Statistical Analysis ...

  3. Open source bioimage informatics for cell biology

    PubMed Central

    Swedlow, Jason R.; Eliceiri, Kevin W.

    2009-01-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery. PMID:19833518

  4. Acquisition of Earth Science Remote Sensing Observations from Commercial Sources: Lessons Learned from the Space Imaging IKONOS Example

    NASA Technical Reports Server (NTRS)

    Goward, Samuel N.; Townshend, John R.; Zanoni, Vicki; Policelli, Fritz; Stanley, Tom; Ryan, Robert; Holekamp, Kara; Underwood, Lauren; Pagnutti, Mary; Fletcher, Rose

    2003-01-01

In an effort to more fully explore the potential of commercial remotely sensed land data sources, the NASA Earth Science Enterprise (ESE) implemented an experimental Scientific Data Purchase (SDP) that solicited bids from the private sector to meet ESE-user data needs. The images from the Space Imaging IKONOS system provided a particularly good match to current ESE missions such as Terra and Landsat 7 and therefore serve as a focal point in this analysis.

  5. BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis.

    PubMed

    Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren

    2017-10-01

    BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
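The core reduction step described above, creating a one-dimensional scattering profile from a two-dimensional detector image, amounts to an azimuthal average about the beam center. A minimal NumPy sketch of that idea follows; BioXTAS RAW's real integrator additionally handles masking, calibration of radius to momentum transfer q, and uncertainty propagation, and the bin count here is an arbitrary assumption.

```python
import numpy as np

def radial_average(image, center, n_bins=30):
    """Azimuthally average a 2D detector image into a 1D radial profile."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return bins[:-1], sums / np.maximum(counts, 1)

# Synthetic isotropic pattern: intensity falls off with distance from center
y, x = np.indices((65, 65))
img = 1.0 / (1.0 + np.hypot(x - 32, y - 32))
radii, profile = radial_average(img, (32, 32))
print("I at center bin:", profile[0], " I at outermost bin:", profile[-1])
```

Averaging over each annulus also improves the signal-to-noise of the 1D profile relative to any single detector pixel, which is why the 2D-to-1D reduction is done before operations such as averaging and subtraction.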

  6. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, as is the case for rainfall-induced shallow landslides. Automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has increased rapidly. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, GeoEye and WorldView, are increasingly considered the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events in VHR optical images. In this work we develop and test a methodology for semi-automatic recognition and mapping of landslide source and transport areas. The method combines object-based image analysis with a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event on Madeira Island, together with a pre-event LiDAR DEM. Our approach proved successful in the recognition of landslides over a 15 km²-wide study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance (true positive rate above 60% and false positive rate below 36% in both validation regions) in the internal mapping of landslide source and transport areas, in particular on the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.
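The object-based part of such a pipeline aggregates per-pixel values into one feature vector per image segment before classification. A hypothetical numpy sketch (the feature names and inputs here are illustrative placeholders, not the authors' actual feature set):

```python
import numpy as np

def segment_features(brightness, ndvi, labels):
    """Build one feature vector (mean brightness, mean NDVI) per image object.

    labels: integer segmentation map assigning each pixel to an object id.
    Returns the object ids and an (n_objects, 2) feature table.
    """
    ids = np.unique(labels)
    feats = np.zeros((ids.size, 2))
    for i, seg in enumerate(ids):
        mask = labels == seg
        feats[i, 0] = brightness[mask].mean()
        feats[i, 1] = ndvi[mask].mean()
    return ids, feats
```

The resulting per-object table would then be fed to a supervised classifier such as a Support Vector Machine (e.g., scikit-learn's `SVC`) trained on labeled landslide and non-landslide segments.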

  7. Acoustics Reflections of Full-Scale Rotor Noise Measurements in NFAC 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Barbely, Natasha Lydia; Kitaplioglu, Cahit; Sim, Ben W.

    2012-01-01

The objective of the current research is to identify the extent of acoustic time history distortions due to wind tunnel wall reflections. Acoustic measurements from the recent full-scale Boeing-SMART rotor test (Fig. 2) will be used to illustrate the quality of noise measurements in the NFAC 40- by 80-Foot Wind Tunnel test section. Results will be compared to PSU-WOPWOP predictions obtained with and without adjustments due to sound reflections off the wind tunnel walls. The present research assumes a rectangular enclosure as shown in Fig. 3a. The Method of Mirror Images is used to account for reflection sources and their acoustic paths by introducing mirror images of the rotor (i.e., the acoustic source) at each wall surface to enforce a no-flow boundary condition at the position of the physical walls (Fig. 3b). While the conventional approach evaluates the "combined" noise from both the source and image rotors at a single microphone position, an alternative approach is used to simplify implementation of PSU-WOPWOP for this reflection analysis. Here, an "equivalent" microphone position is defined with respect to the source rotor for each mirror image, which effectively reduces the reflection analysis to a one-rotor, multiple-microphone problem. This alternative approach has the advantage of allowing each individual "equivalent" microphone, representing the reflection pulse from the associated wall surface, to be adjusted by the panel absorption coefficient illustrated in Fig. 1a. Note that the presence of parallel wall surfaces requires an infinite number of mirror images (Fig. 3c) to satisfy the no-flow boundary conditions. In the present analysis, up to four mirror images (per wall surface) are accounted for to achieve convergence in the predicted time histories.
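The image-source bookkeeping can be sketched in a few lines; because reflection across a wall is an isometry, reflecting the microphone instead of the source yields the same path length, which is exactly what makes the "equivalent microphone" formulation work. A simplified geometric sketch (axis-aligned walls assumed; not the PSU-WOPWOP code):

```python
import numpy as np

def reflect(point, axis, wall):
    """First-order mirror image of a point across an axis-aligned wall plane."""
    p = np.asarray(point, dtype=float).copy()
    p[axis] = 2.0 * wall - p[axis]
    return p

def reflection_delay(source, mic, axis, wall, c=343.0):
    """Arrival delay of a single wall reflection, via the image-source path."""
    image_source = reflect(source, axis, wall)
    return np.linalg.norm(image_source - np.asarray(mic, dtype=float)) / c
```

Reflecting the microphone across the same wall gives an "equivalent" microphone at the identical path length from the unreflected source, so a multi-wall analysis becomes one source observed by many equivalent microphones, each attenuated by the corresponding panel absorption coefficient.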

  8. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

The first release of the Chandra Source Catalog (CSC) was published in 2009 March, and includes information about 94,676 X-ray sources detected in a subset of public ACIS imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲30″. The CSC is a general-purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports medium-sophistication scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools; and (4) includes real X-ray sources detected with flux significance greater than a predefined threshold, while maintaining the number of spurious sources at an acceptable level. For each detected X-ray source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected. In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.

  9. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

We apply our wavelet-based X-ray source detection algorithm, presented in a companion paper, to the specific case of images taken with the ROSAT PSPC detector. Such images are characterized by the presence of detector ``ribs,'' a strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10⁴ s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method on these images is satisfactory and surpasses that of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
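The core idea of wavelet-based detection is that correlating a counts image with a zero-mean "Mexican hat" kernel suppresses smooth background and peaks at point-source positions. A rough single-scale illustration (not the authors' algorithm, which also incorporates exposure maps, a varying PSF, and significance calibration):

```python
import numpy as np

def mexican_hat(size, sigma):
    """2-D Mexican-hat (Laplacian-of-Gaussian-like) wavelet kernel."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    return (1.0 - r2) * np.exp(-r2)

def wavelet_correlate(image, sigma):
    """Correlate the image with the wavelet at one scale (direct sliding sum)."""
    size = int(8 * sigma) | 1               # odd kernel width of roughly 8 sigma
    k = mexican_hat(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad)
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Detection then amounts to thresholding the correlation map at a significance level calibrated, as in the abstract, on source-free simulated backgrounds.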

  10. An External Matrix-Assisted Laser Desorption Ionization Source for Flexible FT-ICR Mass Spectrometry Imaging with Internal Calibration on Adjacent Samples

    NASA Astrophysics Data System (ADS)

    Smith, Donald F.; Aizikov, Konstantin; Duursma, Marc C.; Giskes, Frans; Spaanderman, Dirk-Jan; McDonnell, Liam A.; O'Connor, Peter B.; Heeren, Ron M. A.

    2011-01-01

    We describe the construction and application of a new MALDI source for FT-ICR mass spectrometry imaging. The source includes a translational X-Y positioning stage with a 10 × 10 cm range of motion for analysis of large sample areas, a quadrupole for mass selection, and an external octopole ion trap with electrodes for the application of an axial potential gradient for controlled ion ejection. An off-line LC MALDI MS/MS run demonstrates the utility of the new source for data- and position-dependent experiments. A FT-ICR MS imaging experiment of a coronal rat brain section yields ˜200 unique peaks from m/z 400-1100 with corresponding mass-selected images. Mass spectra from every pixel are internally calibrated with respect to polymer calibrants collected from an adjacent slide.

  11. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

We present a recently developed method, BackTrackBB (Poiata et al. 2016), for imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the nonstationary time series by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients in time, of different scales and a priori unknown predominant frequency, potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection and location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the 3-component records, makes use of P- and S-phase characteristic functions extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the year-long precursory phase of the 2014 Iquique earthquake in Chile; and (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
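The station-pair backprojection idea can be reduced to a toy sketch: for each candidate grid node, sum each pair's cross-correlation evaluated at the travel-time difference that node predicts. This is a much-simplified stand-in for BackTrackBB (homogeneous velocity, raw cross-correlations instead of time-delay likelihood functions):

```python
import numpy as np

def backproject(cfs, station_pos, grid, velocity, fs):
    """Sum station-pair cross-correlations at the delay predicted per grid node.

    cfs: (n_sta, n_samp) characteristic functions, one row per station.
    station_pos, grid: (n, ndim) arrays of positions; velocity in units/s; fs in Hz.
    """
    n_sta, n = cfs.shape
    pairs = [(a, b) for a in range(n_sta) for b in range(a + 1, n_sta)]
    # cross-correlations depend only on the pair, so compute them once
    ccs = {p: np.correlate(cfs[p[0]], cfs[p[1]], mode="full") for p in pairs}
    img = np.zeros(len(grid))
    for gi, node in enumerate(grid):
        tt = np.linalg.norm(station_pos - node, axis=1) / velocity
        for a, b in pairs:
            # numpy's correlate convention: peak sits at lag (tt_a - tt_b) * fs
            idx = n - 1 + int(round((tt[a] - tt[b]) * fs))
            if 0 <= idx < 2 * n - 1:
                img[gi] += ccs[(a, b)][idx]
    return img
```

The node maximizing `img` is the most likely source location; the real method replaces the correlations with multi-band statistical likelihood functions and 3D travel-time grids.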

  12. A dedicated cone-beam CT system for musculoskeletal extremities imaging: design, optimization, and initial performance characterization.

    PubMed

    Zbijewski, W; De Jean, P; Prakash, P; Ding, Y; Stayman, J W; Packard, N; Senn, R; Yang, D; Yorkston, J; Machado, A; Carrino, J A; Siewerdsen, J H

    2011-08-01

    This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. 
Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ~55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 x 20 x 20 cm³ field of view); total acquisition arc of ~240 degrees. The system MTF declines to 50% at ~1.3 mm⁻¹ and to 10% at ~2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ~500 projections at less than ~0.5 kW power, implying ~6.4 mGy (0.064 mSv) for low-dose protocols and ~15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  13. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    PubMed Central

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. 
Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm3 field of view); total acquisition arc of ∼240°. The system MTF declines to 50% at ∼1.3 mm−1 and to 10% at ∼2.7 mm−1, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10–20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography. PMID:21928644
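The geometry figures in this abstract are tied together by simple similar-triangle relations: magnification M = SDD/SAD, and the reconstructed field of view is roughly the detector width divided by M. A quick numeric sketch using the stated SDD and magnification (the 30 cm panel width is an assumption for illustration, not a value from the abstract):

```python
# Cone-beam geometry relations (similar triangles from the focal spot).
sdd = 55.0                   # source-to-detector distance, cm (from abstract)
m = 1.3                      # magnification at isocenter (from abstract)
sad = sdd / m                # implied source-to-axis (isocenter) distance, cm
detector_width = 30.0        # assumed flat-panel width, cm
fov = detector_width / m     # approximate reconstructed FOV diameter, cm
```

With these numbers the implied SAD is about 42.3 cm and the geometric FOV about 23 cm, consistent in scale with the 20 cm bore quoted above.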

  14. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zbijewski, W.; De Jean, P.; Prakash, P.

    2011-08-15

Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. 
Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ~55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 x 20 x 20 cm³ field of view); total acquisition arc of ~240°. The system MTF declines to 50% at ~1.3 mm⁻¹ and to 10% at ~2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ~500 projections at less than ~0.5 kW power, implying ~6.4 mGy (0.064 mSv) for low-dose protocols and ~15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  15. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present the Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. To manage the huge amount of data, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment containing 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate highly with our manually measured data, with correlation coefficients up to 0.98 and 0.95, respectively. In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818

  16. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
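The core ROI operation in any such tool, reducing an image series to per-ROI fluorescence traces and normalizing to ΔF/F, fits in a few lines of numpy. A generic sketch (an illustration of the concept, not SamuROI's actual code):

```python
import numpy as np

def roi_dff(stack, masks, baseline_frames=10):
    """Per-ROI mean-fluorescence traces and dF/F from an image series.

    stack: (t, h, w) array of frames; masks: list of boolean (h, w) ROI masks.
    Baseline F0 is the mean over the first `baseline_frames` frames per ROI.
    """
    traces = np.array([stack[:, m].mean(axis=1) for m in masks])
    f0 = traces[:, :baseline_frames].mean(axis=1, keepdims=True)
    return (traces - f0) / f0
```

The same reduction applies at every spatial scale mentioned in the abstract; only the masks change, from single spines to whole fields of view.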

  17. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  18. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured with uncontrolled conditions and undergone variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamental to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of the source camera attribution more comprehensively in the domain of the image forensics in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on the specific parameters, such as colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. 
More recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and categorised into four classes: optical aberration based, sensor camera fingerprint based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems, and the proposed strategies, of such schemes based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined from the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Automated source classification of new transient sources

    NASA Astrophysics Data System (ADS)

    Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.

    2017-10-01

    The EXTraS project harvests the hitherto unexplored temporal domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients, missed by standard image analysis, and a search and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode) which are then tested on a sample of known source classes and used for classification.
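The supervised classification step described above was performed in R (randomForest); the same idea can be sketched in Python with scikit-learn. The feature names, class labels, and synthetic data below are illustrative assumptions, not the actual EXTraS features or training sample:

```python
# Sketch of supervised source classification: train a random forest on
# per-source features, then classify held-out sources. The two synthetic
# classes are separated in a made-up "variability index" / "hardness
# ratio" feature space purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 200
stars = np.column_stack([rng.normal(0.8, 0.1, n), rng.normal(-0.5, 0.2, n)])
agn = np.column_stack([rng.normal(0.2, 0.1, n), rng.normal(0.4, 0.2, n)])
X = np.vstack([stars, agn])
y = np.array(["star"] * n + ["agn"] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # well-separated toy classes, so near 1.0
```

In practice the training sample would be sources of known class with features derived from the EPIC timing and spectral analysis, and the fitted forest would be applied to newly detected transients.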

  20. Corneal topography with high-speed swept source OCT in clinical examination

    PubMed Central

    Karnowski, Karol; Kaluzny, Bartlomiej J.; Szkulmowski, Maciej; Gora, Michalina; Wojtkowski, Maciej

    2011-01-01

We demonstrate the applicability of high-speed swept source (SS) optical coherence tomography (OCT) for quantitative evaluation of corneal topography. A high-speed OCT device acquiring 108,000 lines/s permits dense 3D imaging of the anterior segment in less than one quarter of a second, minimizing the influence of motion artifacts on the final images and topographic analysis. The swept laser performance was specially adapted to meet imaging depth requirements. For the first time to our knowledge, the results of a quantitative corneal analysis based on SS OCT are presented for clinical pathologies such as keratoconus, a cornea with a superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty. Additionally, a comparison with widely used commercial systems, a Placido-based topographer and a Scheimpflug imaging-based topographer, is demonstrated. PMID:21991558

  1. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    PubMed

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. Source code for the data analysis is written in R. The equations used to calculate the image descriptors have also been provided.
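The statistical step of the pipeline (the paper's code is in R) amounts to principal component analysis over a matrix of image-derived shape descriptors; a minimal sketch in Python with synthetic feature values, assuming nothing about the actual descriptors used:

```python
# PCA over a feature matrix (plants x shape descriptors) via SVD of the
# centered data. The synthetic features are driven by 3 latent factors,
# so the first few principal components capture nearly all the variance,
# mirroring the paper's finding that a handful of PCs suffice.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(60, 3))            # 3 true underlying factors
mixing = rng.normal(size=(3, 8))             # 8 descriptors per plant
features = latent @ mixing + 0.01 * rng.normal(size=(60, 8))

centered = features - features.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance fraction per PC
top3 = explained[:3].sum()                   # nearly all of the variance
```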

  2. Analysis of Magnetic Resonance Image Signal Fluctuations Acquired During MR-Guided Radiotherapy.

    PubMed

    Breto, Adrian L; Padgett, Kyle R; Ford, John C; Kwon, Deukwoo; Chang, Channing; Fuss, Martin; Stoyanova, Radka; Mellon, Eric A

    2018-03-28

Magnetic resonance-guided radiotherapy (MRgRT) is a new and evolving treatment modality that allows unprecedented visualization of the tumor and surrounding anatomy. MRgRT includes daily 3D magnetic resonance imaging (MRI) for setup and rapidly repeated near real-time MRI scans during treatment for target tracking. One of the more exciting potential benefits of MRgRT is the ability to analyze serial MRIs to monitor treatment response or predict outcomes. A typical radiation treatment (RT) over the span of 10-15 minutes on the MRIdian system (ViewRay, Cleveland, OH) yields thousands of "cine" images, each acquired in 250 ms. These unique data allow a glimpse into image intensity changes during RT delivery. In this report, we analyze cine images from a single-fraction RT of a glioblastoma patient on the ViewRay platform in order to characterize the dynamic signal changes occurring during treatment. The individual frames in the cines were saved into DICOM format and read into an MIM image analysis platform (MIM Software, Cleveland, OH) as a time series. The three possible states of the three Cobalt-60 radiation sources-OFF, READY, and ON-were also recorded. An in-house Java plugin for MIM was created in order to perform principal component analysis (PCA) on each of the datasets. The analysis resulted in a first PC related to a monotonic signal increase over the course of the treatment fraction. We found several distortion patterns in the data that we postulate result from the perturbation of the magnetic field by the moving metal parts of the platform while treatment was being administered. The largest variations were detected when all Cobalt-60 sources were OFF. During this phase of the treatment, the gantry and multi-leaf collimators (MLCs) are moving. Conversely, when all Cobalt-60 sources were in the ON position, the image signal fluctuations were minimal, reflecting very little mechanical motion. 
At this phase, the gantry, the MLCs, and the sources are fixed in their positions. These findings were confirmed in a study with the daily quality assurance (QA) phantom. While the identified variations were not related to physiological processes, our findings confirm the sensitivity of the developed approach to very small fluctuations. Relating these variations to the physical changes that occur during treatment demonstrates the ability of the technique to uncover their underlying sources.
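The PCA over a cine time series (the record's plugin is in Java for MIM) treats each frame as one observation and each pixel as one variable; a sketch on synthetic data, where an assumed slow monotonic intensity drift dominates the leading component as the abstract reports:

```python
# PCA of a cine series: frames x pixels matrix, centered per pixel, then
# SVD. A monotonic drift (synthetic stand-in for the reported signal
# increase) is rank-1, so it lands almost entirely in the first PC.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_pix = 120, 400
t = np.linspace(0.0, 1.0, n_frames)
drift = np.outer(t, rng.uniform(0.5, 1.0, n_pix))    # monotonic increase
cine = drift + 0.05 * rng.normal(size=(n_frames, n_pix))

centered = cine - cine.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1_timecourse = u[:, 0] * s[0]                      # temporal loading of PC1
# SVD component signs are arbitrary; orient PC1 to increase with time.
if pc1_timecourse[-1] < pc1_timecourse[0]:
    pc1_timecourse = -pc1_timecourse
frac_var = s[0] ** 2 / np.sum(s**2)                  # PC1 dominates
```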

  3. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

    PubMed Central

    Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542

  4. WE-D-204-06: An Open Source ImageJ CatPhan Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, G

    2015-06-15

Purpose: The CatPhan is a popular QA device for assessing CT image quality. A number of software options perform analysis of the CatPhan; however, they are expensive, and they offer the user little ability to adjust the analysis when it is not running properly. An open source tool is an effective solution. Methods: To use the software, the user imports the CT as an image sequence in ImageJ, scrolls to the slice with the lateral dots, and runs the plugin. If tolerance constraints are not already created, the user is prompted to enter them or to use generic tolerances. Upon completion of the analysis, the plugin calls pdfLaTeX to compile a pdf report; a csv version of the report is also produced. A log of the results from all CatPhan scans is kept as a csv file, which the user can use to baseline the machine. Results: The tool is capable of detecting the orientation of the phantom. If the CatPhan was scanned backwards, one can simply flip the stack of images horizontally and proceed with the analysis. The analysis includes sensitometry (estimating the effective beam energy), HU values and linearity, low contrast visibility (using LDPE and polystyrene), contrast scale, geometric accuracy, slice thickness accuracy, spatial resolution (giving the MTF from the line pairs as well as the point spread function), CNR, low contrast detectability (including the raw data), and uniformity (including the cupping effect). Conclusion: This is a robust tool that analyzes more components of the CatPhan than other software options (with the exception of ImageOwl). It produces an elegant pdf report and keeps a log of analyses for long-term tracking of the system. Because it is open source, users are able to customize any component of it.

  5. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    PubMed

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  6. Noninvasive Visualization and Analysis of the Human Parafoveal Capillary Network Using Swept Source OCT Optical Microangiography.

    PubMed

    Kuehlewein, Laura; Tepelus, Tudor C; An, Lin; Durbin, Mary K; Srinivas, Sowmya; Sadda, Srinivas R

    2015-06-01

We characterized the foveal avascular zone (FAZ) and the parafoveal capillary network in healthy subjects using swept source OCT optical microangiography (OMAG). We acquired OMAG images of the macula of 19 eyes (13 healthy individuals) using a prototype swept source laser OCT. En face images of the retinal vasculature were generated for the superficial and deep inner retinal layers (SRL/DRL) in regions of interest 250 (ROI-250) and 500 (ROI-500) μm from the FAZ border. The mean area (mm²) of the FAZ was 0.304 ± 0.132 for the SRL and 0.486 ± 0.162 for the DRL (P < 0.001). Mean vessel density (%) was 67.3 ± 6.4 for the SRL and 34.5 ± 8.6 for the DRL in the ROI-250 (P < 0.001), and 74.2 ± 3.9 for the SRL and 72.3 ± 4.9 for the DRL in the ROI-500 (P = 0.160). Swept source OMAG images of healthy subjects allowed analysis of the FAZ and the density of the parafoveal capillary network at different retinal layers.
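The two metrics reported above reduce to simple pixel counting once the en face image has been segmented; a sketch with made-up masks and an assumed pixel size (the record does not state the actual scan sampling):

```python
# FAZ area from a boolean avascular-zone mask, and vessel density as the
# percentage of vessel pixels inside a region of interest. PIXEL_MM is an
# illustrative assumption, not the instrument's real sampling.
import numpy as np

PIXEL_MM = 0.006  # assumed isotropic pixel size (6 um)

def faz_area_mm2(faz_mask: np.ndarray) -> float:
    """Area of the foveal avascular zone from a boolean mask."""
    return float(faz_mask.sum()) * PIXEL_MM**2

def vessel_density_pct(vessel_mask: np.ndarray, roi_mask: np.ndarray) -> float:
    """Percentage of ROI pixels flagged as vessel."""
    return 100.0 * float((vessel_mask & roi_mask).sum()) / float(roi_mask.sum())

# Toy example: 100x100 en face image, circular FAZ, random "vessels" outside.
yy, xx = np.mgrid[:100, :100]
faz = (yy - 50) ** 2 + (xx - 50) ** 2 < 15**2
rng = np.random.default_rng(3)
vessels = (rng.random((100, 100)) < 0.7) & ~faz
roi = ~faz                                  # simplified parafoveal region
area = faz_area_mm2(faz)
density = vessel_density_pct(vessels, roi)
```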

  7. Velocity analysis of simultaneous-source data using high-resolution semblance—coping with the strong noise

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Qu, Shan; Zu, Shaohuan

    2016-02-01

Direct imaging of simultaneous-source (or blended) data, without the need of deblending, requires a precise subsurface velocity model. In this paper, we focus on the velocity analysis of simultaneous-source data using the normal moveout-based velocity picking approach. We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint domain. The similarity-weighted semblance can help us obtain a much better velocity spectrum, with higher resolution and higher reliability, than the traditional semblance. The similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage, and thus is not sensitive to intense interference. We use both synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining a reliable subsurface velocity model for direct migration of simultaneous-source data. The migrated image of blended field data using a prestack Kirchhoff time migration approach, based on the velocity picked from the similarity-weighted semblance, is very close to the migrated image of the unblended data.
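The semblance measure that underlies this kind of velocity picking has a simple form: for an NMO-corrected window of traces it approaches 1 when the traces are coherent and roughly 1/N for incoherent energy. A sketch of the plain (unweighted) version; the similarity-weighted variant of the paper adds per-trace weights, which this does not reproduce:

```python
# Conventional semblance over a window of NMO-corrected traces:
#   S = sum_t (sum_i a_i(t))^2 / (N * sum_t sum_i a_i(t)^2)
import numpy as np

def semblance(window: np.ndarray) -> float:
    """window: (n_samples, n_traces) of NMO-corrected amplitudes."""
    n_traces = window.shape[1]
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = n_traces * np.sum(window**2)
    return float(num / den)

t = np.linspace(0.0, 1.0, 200)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.002)          # a simple event
coherent = np.column_stack([wavelet] * 10)           # identical traces
rng = np.random.default_rng(4)
noisy = rng.normal(size=(200, 10))                   # pure noise

s_coherent = semblance(coherent)   # exactly 1.0 for identical traces
s_noise = semblance(noisy)         # near 1/10 for random traces
```

Scanning this measure over trial velocities (i.e., over trial NMO corrections) produces the velocity spectrum from which picks are made.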

  8. The Use of Narrative Paradigm Theory in Assessing Audience Value Conflict in Image Advertising.

    ERIC Educational Resources Information Center

    Stutts, Nancy B.; Barker, Randolph T.

    1999-01-01

    Presents an analysis of image advertisement developed from Narrative Paradigm Theory. Suggests that the nature of postmodern culture makes image advertising an appropriate external communication strategy for generating stake holder loyalty. Suggests that Narrative Paradigm Theory can identify potential sources of audience conflict by illuminating…

  9. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm

    NASA Astrophysics Data System (ADS)

    Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.

  10. Transmission ultrasonography. [time delay spectrometry for soft tissue transmission imaging

    NASA Technical Reports Server (NTRS)

    Heyser, R. C.; Le Croissette, D. H.

    1973-01-01

    Review of the results of the application of an advanced signal-processing technique, called time delay spectrometry, in obtaining soft tissue transmission images by transmission ultrasonography, both in vivo and in vitro. The presented results include amplitude ultrasound pictures and phase ultrasound pictures obtained by this technique. While amplitude ultrasonographs of tissue are closely analogous to X-ray pictures in that differential absorption is imaged, phase ultrasonographs represent an entirely new source of information based on differential time of propagation. Thus, a new source of information is made available for detailed analysis.

  11. The Utility of the Extended Images in Ambient Seismic Wavefield Migration

    NASA Astrophysics Data System (ADS)

    Girard, A. J.; Shragge, J. C.

    2015-12-01

Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and widely used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags into the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternative method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.

  12. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    NASA Astrophysics Data System (ADS)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is chosen automatically via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
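The Sersic function mentioned above has a standard closed form, I(r) = I_e exp(-b_n [(r/r_e)^(1/n) - 1]); a minimal evaluation sketch with illustrative parameter values (this is the generic profile, not AutoLens's actual fitting code):

```python
# Evaluate a Sersic surface-brightness profile and a two-component
# superposition of the kind used for lens light modeling.
import numpy as np

def sersic(r, I_e, r_e, n):
    """I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1))."""
    # Common analytic approximation for b_n (Ciotti & Bertin 1999).
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 100)
bulge = sersic(r, I_e=1.0, r_e=1.5, n=4.0)   # de Vaucouleurs-like component
disk = sersic(r, I_e=0.3, r_e=3.0, n=1.0)    # exponential-like component
total = bulge + disk                          # superposition of Sersic terms
```

By construction I(r_e) = I_e, i.e. r_e is the radius enclosing half the total light when b_n is chosen this way.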

  13. Software-based measurement of thin filament lengths: an open-source GUI for Distributed Deconvolution analysis of fluorescence images

    PubMed Central

    Gokhin, David S.; Fowler, Velia M.

    2016-01-01

    The periodically arranged thin filaments within the striated myofibrils of skeletal and cardiac muscle have precisely regulated lengths, which can change in response to developmental adaptations, pathophysiological states, and genetic perturbations. We have developed a user-friendly, open-source ImageJ plugin that provides a graphical user interface (GUI) for super-resolution measurement of thin filament lengths by applying Distributed Deconvolution (DDecon) analysis to periodic line scans collected from fluorescence images. In the workflow presented here, we demonstrate thin filament length measurement using a phalloidin-stained cryosection of mouse skeletal muscle. The DDecon plugin is also capable of measuring distances of any periodically localized fluorescent signal from the Z- or M-line, as well as distances between successive Z- or M-lines, providing a broadly applicable tool for quantitative analysis of muscle cytoarchitecture. These functionalities can also be used to analyze periodic fluorescence signals in nonmuscle cells. PMID:27644080
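As a rough illustration of extracting a repeat distance from a periodic fluorescence line scan, here is an autocorrelation-based sketch on synthetic data. Note this is a simplification for illustration only: DDecon itself fits a model of point-spread-convolved fluorescence distributions, which this does not reproduce:

```python
# Estimate the repeat distance (in pixels) of a periodic line scan as the
# lag of the first autocorrelation peak after the zero-lag maximum.
# Assumes one dominant periodic component.
import numpy as np

def repeat_distance(profile: np.ndarray) -> int:
    x = profile - profile.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    start = np.flatnonzero(ac < 0)[0]          # skip past the zero-lag peak
    half = ac.size // 2                        # avoid the noisy long-lag tail
    return int(start + np.argmax(ac[start:half]))

# Synthetic line scan: period of 25 pixels plus a little noise.
rng = np.random.default_rng(7)
n = np.arange(500)
profile = 1.0 + 0.5 * np.sin(2 * np.pi * n / 25) + 0.05 * rng.normal(size=500)
d = repeat_distance(profile)                   # recovers ~25 pixels
```

Multiplying the recovered pixel lag by the image calibration (microns per pixel) gives the physical spacing, e.g. between successive Z-lines.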

  14. Open Source High Content Analysis Utilizing Automated Fluorescence Lifetime Imaging Microscopy.

    PubMed

    Görlitz, Frederik; Kelly, Douglas J; Warren, Sean C; Alibhai, Dominic; West, Lucien; Kumar, Sunil; Alexandrov, Yuriy; Munro, Ian; Garcia, Edwin; McGinty, James; Talbot, Clifford; Serwa, Remigiusz A; Thinon, Emmanuelle; da Paola, Vincenzo; Murray, Edward J; Stuhmeier, Frank; Neil, Mark A A; Tate, Edward W; Dunsby, Christopher; French, Paul M W

    2017-01-18

    We present an open source high content analysis instrument utilizing automated fluorescence lifetime imaging (FLIM) for assaying protein interactions using Förster resonance energy transfer (FRET) based readouts of fixed or live cells in multiwell plates. This provides a means to screen for cell signaling processes read out using intramolecular FRET biosensors or intermolecular FRET of protein interactions such as oligomerization or heterodimerization, which can be used to identify binding partners. We describe here the functionality of this automated multiwell plate FLIM instrumentation and present exemplar data from our studies of HIV Gag protein oligomerization and a time course of a FRET biosensor in live cells. A detailed description of the practical implementation is then provided with reference to a list of hardware components and a description of the open source data acquisition software written in µManager. The application of FLIMfit, an open source MATLAB-based client for the OMERO platform, to analyze arrays of multiwell plate FLIM data is also presented. The protocols for imaging fixed and live cells are outlined and a demonstration of an automated multiwell plate FLIM experiment using cells expressing fluorescent protein-based FRET constructs is presented. This is complemented by a walk-through of the data analysis for this specific FLIM FRET data set.

  15. Open Source High Content Analysis Utilizing Automated Fluorescence Lifetime Imaging Microscopy

    PubMed Central

    Warren, Sean C.; Alibhai, Dominic; West, Lucien; Kumar, Sunil; Alexandrov, Yuriy; Munro, Ian; Garcia, Edwin; McGinty, James; Talbot, Clifford; Serwa, Remigiusz A.; Thinon, Emmanuelle; da Paola, Vincenzo; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Tate, Edward W.; Dunsby, Christopher; French, Paul M. W.

    2017-01-01

    We present an open source high content analysis instrument utilizing automated fluorescence lifetime imaging (FLIM) for assaying protein interactions using Förster resonance energy transfer (FRET) based readouts of fixed or live cells in multiwell plates. This provides a means to screen for cell signaling processes read out using intramolecular FRET biosensors or intermolecular FRET of protein interactions such as oligomerization or heterodimerization, which can be used to identify binding partners. We describe here the functionality of this automated multiwell plate FLIM instrumentation and present exemplar data from our studies of HIV Gag protein oligomerization and a time course of a FRET biosensor in live cells. A detailed description of the practical implementation is then provided with reference to a list of hardware components and a description of the open source data acquisition software written in µManager. The application of FLIMfit, an open source MATLAB-based client for the OMERO platform, to analyze arrays of multiwell plate FLIM data is also presented. The protocols for imaging fixed and live cells are outlined and a demonstration of an automated multiwell plate FLIM experiment using cells expressing fluorescent protein-based FRET constructs is presented. This is complemented by a walk-through of the data analysis for this specific FLIM FRET data set. PMID:28190060

  16. Imagens de Leitura na Literatura de Cordel (Images of Reading in "Cordel" Literature).

    ERIC Educational Resources Information Center

    Hata, Luli

    1997-01-01

    Shows, in "Cordel" literature (a popular manifestation found in northeastern Brazil) an expressive source for the analysis of popular culture in Brazil. Uses this literature to discuss images of reading. (PA)

  17. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. 
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
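The core operation described above, correlating a binned counts image with a scaled Marr ("Mexican hat") wavelet and flagging significant coefficients, can be sketched as below. This is a bare illustration: WAVDETECT derives its thresholds from the background sampling distribution, whereas here a simple sigma cut stands in for that step:

```python
# Correlate a Poisson counts image with a 2-D Marr wavelet and flag
# pixels whose coefficient exceeds a crude significance threshold.
import numpy as np

def mexican_hat_2d(size: int, sigma: float) -> np.ndarray:
    """2-D Marr ("Mexican hat") wavelet kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx**2 + yy**2) / (2.0 * sigma**2)
    return (1.0 - r2) * np.exp(-r2)

def correlate2d(image, kernel):
    """Brute-force 'same'-size correlation (avoids a scipy dependency)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(5)
image = rng.poisson(1.0, size=(64, 64)).astype(float)   # flat background
image[30:33, 40:43] += 20.0                             # injected point source

coeff = correlate2d(image, mexican_hat_2d(9, 1.5))
detections = coeff > coeff.mean() + 5.0 * coeff.std()   # crude sigma cut
ys, xs = np.nonzero(detections)                         # cluster at the source
```

A real detection run would repeat this over several wavelet scales and calibrate the threshold against the expected number of spurious sources per field, as the record describes.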

  18. Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography

    NASA Astrophysics Data System (ADS)

    Weng, Jiawen; Clark, David C.; Kim, Myung K.

    2016-05-01

A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built from the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulations and experimental studies, employing LEDs as discrete point sources and resolution targets as extended sources, are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for the 3D analysis of dynamic systems.
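The recovery step in a CS method of this kind solves a sparse inverse problem y = A x; a minimal sketch using iterative soft-thresholding (ISTA). The random matrix A below is a stand-in assumption: the paper's actual sensing operator encodes the SIDH propagation physics, and the paper does not specify ISTA as its recovery algorithm:

```python
# ISTA for the LASSO problem: minimize 0.5*||A x - y||^2 + lam*||x||_1.
# A sparse "scene" of a few point sources is recovered from fewer
# measurements than unknowns.
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(6)
n, m = 200, 80                                    # 200 unknowns, 80 measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)          # stand-in sensing operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
y = A @ x_true

x_rec = ista(A, y)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```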

  19. Resolving z ~2 galaxy using adaptive coadded source plane reconstruction

    NASA Astrophysics Data System (ADS)

    Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian

    2018-06-01

Natural magnification provided by gravitational lensing, coupled with integral field spectrographic (IFS) observations and adaptive optics (AO) imaging, has become the frontier of spatially resolved studies of high-redshift galaxies (z>1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of background lensed galaxies, and they sensitively control the accuracy and precision of source-plane reconstructions of the observed lensed arcs. The effective source-plane resolution set by the image-plane (observed) point spread function (PSF) makes it challenging to recover the unlensed (source-plane) surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z=2 lensed galaxy using spatially resolved observations of two different multiple images of the lensed target. To deal with the PSFs of the two data sets on different multiple images of the galaxy, we employ a forward (source-to-image) approach to merge these independent observations. Using our novel technique, we present a detailed analysis of the source-plane dynamics at scales much finer than previously attainable through traditional image-inversion methods. Moreover, our technique adapts to magnification, allowing us to achieve higher resolution in highly magnified regions of the source. We find that this lensed system shows strong evidence of a minor merger. In my talk, I present this case study of the z=2 lensed galaxy and discuss the applications of our algorithm to the plethora of lensed systems that will become available through future telescopes such as JWST and GMT.

  20. Glow discharge sources for atomic and molecular analyses

    NASA Astrophysics Data System (ADS)

    Storey, Andrew Patrick

Two types of glow discharges were used and characterized for chemical analysis. The flowing atmospheric pressure afterglow (FAPA) source, based on a helium glow discharge (GD), was utilized to analyze samples with molecular mass spectrometry. A second GD, operated at reduced pressure in argon, was employed to map the elemental composition of a solid surface with novel optical detection systems, enabling new applications and perspectives for GD emission spectrometry. Like many plasma-based ambient desorption-ionization sources used around the world, the FAPA requires a supply of helium to operate effectively. With increasing pressure on global helium supply and pricing, the use of an interrupted stream of helium for analysis was explored for vapor and solid samples. In addition to the mass spectra generated by the FAPA source, schlieren imaging and infrared thermography were employed to map the behavior of the source and its surroundings under the altered conditions. Additionally, a new annular microplasma variation of the FAPA source was developed and characterized. A spectroscopic imaging system that utilized an adjustable-tilt interference filter was used to map the elemental composition of a sample surface by glow discharge emission spectroscopy. This apparatus was compared to other GD imaging techniques for mapping elemental surface composition. The wide bandpass filter resulted in significant spectral interferences that could be partially overcome with chemometric data processing. Because time-resolved GD emission spectroscopy can provide fine depth-profiling measurements, a natural extension of GD imaging would be its application to three-dimensional characterization of samples. However, the simultaneous cathodic sputtering that occurs across the sample results in a sampling process that is not completely predictable. These issues are frequently encountered when laterally varied samples are explored with glow discharge imaging techniques.
These insights are described with respect to their consequences for both imaging and conventional GD spectroscopic techniques.

  1. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
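The kind of scripted, logged, multi-threaded chaining of processing modules described here can be sketched as follows (the stage names are hypothetical stand-ins for the real C++/ITK modules, and the tile list is invented):

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

# Hypothetical stage functions standing in for the real compiled modules.
def mosaic(tile):
    return f"mosaicked({tile})"

def preprocess(m):
    return f"preprocessed({m})"

def segment(p):
    return f"segmented({p})"

def process_tile(tile):
    """Run one tile through the full chain, logging every step."""
    result = tile
    for stage in (mosaic, preprocess, segment):
        result = stage(result)
        log.info("%s -> %s", stage.__name__, result)
    return result

tiles = [f"tile_{i}" for i in range(4)]
with ThreadPoolExecutor(max_workers=2) as pool:   # multi-threaded execution
    results = list(pool.map(process_tile, tiles))
print(results[0])  # → segmented(preprocessed(mosaicked(tile_0)))
```

Python's role here is exactly what the abstract describes: gluing compiled stages together while logging each step and parallelizing across inputs.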

  2. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
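One of the analysis tools mentioned, the peri-stimulus time histogram, can be sketched independently of FocusStack (the window, bin width, and toy event times are arbitrary choices for illustration):

```python
import numpy as np

def psth(event_times, stim_onsets, window=(-0.5, 1.5), bin_width=0.1):
    """Peri-stimulus time histogram: bin event times relative to each stimulus
    onset and average over trials (events per bin per trial)."""
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in stim_onsets:
        counts += np.histogram(event_times - t0, bins=edges)[0]
    return edges, counts / len(stim_onsets)

# Toy data: one response reliably 0.25 s after each of three stimuli.
onsets = np.array([1.0, 3.0, 5.0])
events = onsets + 0.25
edges, rate = psth(events, onsets)
print(edges[np.argmax(rate)])  # left edge of the peak bin, ~0.2 s
```

Events far from any onset fall outside the window and are discarded, so the histogram isolates stimulus-locked activity.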

  3. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    PubMed Central

    Muir, Dylan R.; Kampa, Björn M.

    2015-01-01

Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories. PMID:25653614

  4. Acoustic Source Analysis of Magnetoacoustic Tomography With Magnetic Induction for Conductivity Gradual-Varying Tissues.

    PubMed

    Wang, Jiawei; Zhou, Yuqi; Sun, Xiaodong; Ma, Qingyu; Zhang, Dong

    2016-04-01

As a multiphysics imaging approach, magnetoacoustic tomography with magnetic induction (MAT-MI) works on the physical mechanism of magnetic excitation, acoustic vibration, and transmission. Based on a theoretical analysis of the source vibration, numerical studies are conducted to simulate the pathological changes of tissues for a single-layer cylindrical conductivity gradual-varying model and to estimate the strengths of the sources inside the model. The results suggest that the inner source is generated by the product of the conductivity and the curl of the induced electric intensity inside the conductivity-homogeneous medium, while the boundary source is produced by the cross product of the gradient of the conductivity and the induced electric intensity at the conductivity boundary. For a biological tissue with low conductivity, the strength of the boundary source is much higher than that of the inner source only when the size of the conductivity transition zone is small. In this case, the tissue can be treated as a conductivity abrupt-varying model, ignoring the influence of the inner source. Otherwise, the contributions of the inner and boundary sources should be evaluated together quantitatively. This study provides a basis for further work on precise image reconstruction in MAT-MI for pathological tissues.
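The inner/boundary decomposition described above can be written compactly (a standard form of the MAT-MI acoustic source term; the notation here is assumed, with static field B₀ and eddy current J = σE, so that ∇·(E×B₀) = B₀·(∇×E) and ∇σ·(E×B₀) = B₀·(∇σ×E)):

```latex
% Acoustic wave equation of MAT-MI with its source term decomposed:
\nabla^2 p - \frac{1}{c_s^2}\frac{\partial^2 p}{\partial t^2}
    = \nabla \cdot (\mathbf{J} \times \mathbf{B}_0),
\qquad
\nabla \cdot (\sigma \mathbf{E} \times \mathbf{B}_0)
    = \underbrace{(\nabla\sigma \times \mathbf{E}) \cdot \mathbf{B}_0}_{\text{boundary source}}
    + \underbrace{\sigma\,(\nabla \times \mathbf{E}) \cdot \mathbf{B}_0}_{\text{inner source}}
```

The first term vanishes wherever the conductivity is homogeneous (∇σ = 0), which is exactly the abstract's point: for a sharp transition zone the boundary term dominates and the inner term may be neglected.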

  5. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
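The band-decomposition-and-energy-scaling idea can be sketched with a crude two-band version (a box blur as the low-pass step, invented reference energies, and random test images; the paper's actual method uses more bands and local, iterative scaling):

```python
import numpy as np

def box_blur(img, r=2):
    """Mean filter with edge padding; the crude low-pass step of this sketch."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def normalize(img, ref_energies=(1.0, 1.0)):
    """Split the image into a low and a high band, scale each band's RMS
    energy to a reference value, then recombine around the original mean."""
    low = box_blur(img)
    high = img - low
    out = np.full_like(img, low.mean(), dtype=float)
    for band, ref in zip((low - low.mean(), high), ref_energies):
        rms = np.sqrt(np.mean(band ** 2))
        out += band * (ref / max(rms, 1e-12))
    return out

rng = np.random.default_rng(0)
a = rng.normal(100.0, 5.0, (32, 32))    # bright, low-contrast "source A"
b = rng.normal(10.0, 50.0, (32, 32))    # dim, high-contrast "source B"
na, nb = normalize(a), normalize(b)     # band energies now match across sources
```

After normalization the two images, which came from very different "acquisition settings," have comparable contrast, which is what lets a supervised system trained on one source generalize to the other.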

  6. Retinal fundus images for glaucoma analysis: the RIGA dataset

    NASA Astrophysics Data System (ADS)

    Almazroa, Ahmed; Alodhayb, Sami; Osman, Essameldin; Ramadan, Eslam; Hummadi, Mohammed; Dlaim, Mohammed; Alkatee, Muhannad; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2018-03-01

Glaucomatous neuropathy is a major cause of irreversible blindness worldwide. Current models of chronic care will not be able to close the gap between the growing prevalence of glaucoma and the challenges of access to healthcare services. Teleophthalmology is being developed to close this gap. In order to develop automated techniques for glaucoma detection that can be used in teleophthalmology, we have developed a large retinal fundus dataset. A de-identified dataset of retinal fundus images for glaucoma analysis (RIGA) was derived from three sources, for a total of 750 images. The optic cup and disc boundaries for each image were marked and annotated manually by six experienced ophthalmologists, and the cup-to-disc ratio (CDR) estimates were included. Six parameters were extracted and assessed among the ophthalmologists (the disc area and centroid, cup area and centroid, and horizontal and vertical cup-to-disc ratios). The inter-observer annotations were compared by calculating, for every image, the standard deviation (SD) across the six ophthalmologists in order to identify outliers; these SD values were used to filter the corresponding images. The dataset will be made available to the research community to crowd-source further analyses from other research groups and to develop, validate, and implement analysis algorithms appropriate for tele-glaucoma assessment. The RIGA dataset can be freely accessed online through the University of Michigan Deep Blue website (doi:10.7302/Z23R0R29).
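The inter-observer SD filtering step can be sketched as follows (the annotation values and the outlier rule are invented for illustration; the dataset's actual parameters and thresholds are described in the record):

```python
from statistics import mean, stdev

# Hypothetical vertical cup-to-disc ratio annotations: image -> six observers.
annotations = {
    "img_01": [0.42, 0.45, 0.44, 0.43, 0.46, 0.44],
    "img_02": [0.30, 0.55, 0.41, 0.62, 0.25, 0.48],  # poor agreement
    "img_03": [0.61, 0.60, 0.63, 0.62, 0.59, 0.61],
}

# Per-image standard deviation across the six annotators.
sds = {name: stdev(vals) for name, vals in annotations.items()}

# Flag images whose inter-observer SD is unusually large for the dataset.
threshold = mean(sds.values()) + stdev(sds.values())
flagged = sorted(name for name, sd in sds.items() if sd > threshold)
print(flagged)  # → ['img_02']
```

Images where the annotators disagree strongly are exactly the ones to filter or review before using the annotations as ground truth.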

  7. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.

  8. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  9. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
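A generic compressed-sensing recovery from a subset of Fourier components can be sketched with ISTA (iterative soft-thresholding); this is a stand-in illustration of the measurement model, not the VIS_CS algorithm itself, and all sizes and values are invented:

```python
import numpy as np

n = 64
x_true = np.zeros(n)
x_true[[10, 30, 45]] = [5.0, 3.0, 4.0]          # sparse "X-ray source map"

F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
rng = np.random.default_rng(0)
rows = rng.choice(n, size=32, replace=False)    # 32 measured Fourier components
A = F[rows]
y = A @ x_true                                  # the measured "visibilities"

# ISTA: gradient step on 0.5*||Ax - y||^2, then complex soft-thresholding.
x = np.zeros(n, dtype=complex)
lam = 0.05                                      # sparsity weight
for _ in range(500):                            # step size 1 (rows of A orthonormal)
    z = x - A.conj().T @ (A @ x - y)
    mag = np.abs(z)
    x = z * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)

support = set(np.flatnonzero(np.abs(x) > 0.5))
print(sorted(support))
```

Even though only half the Fourier components are measured, the sparsity prior pins down the source positions, which is the core idea behind reconstructing flare images from a limited set of visibilities.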

  10. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  11. Cluster secondary ion mass spectrometry microscope mode mass spectrometry imaging.

    PubMed

    Kiss, András; Smith, Donald F; Jungmann, Julia H; Heeren, Ron M A

    2013-12-30

    Microscope mode imaging for secondary ion mass spectrometry is a technique with the promise of simultaneous high spatial resolution and high-speed imaging of biomolecules from complex surfaces. Technological developments such as new position-sensitive detectors, in combination with polyatomic primary ion sources, are required to exploit the full potential of microscope mode mass spectrometry imaging, i.e. to efficiently push the limits of ultra-high spatial resolution, sample throughput and sensitivity. In this work, a C60 primary source was combined with a commercial mass microscope for microscope mode secondary ion mass spectrometry imaging. The detector setup is a pixelated detector from the Medipix/Timepix family with high-voltage post-acceleration capabilities. The system's mass spectral and imaging performance is tested with various benchmark samples and thin tissue sections. The high secondary ion yield (with respect to 'traditional' monatomic primary ion sources) of the C60 primary ion source and the increased sensitivity of the high voltage detector setup improve microscope mode secondary ion mass spectrometry imaging. The analysis time and the signal-to-noise ratio are improved compared with other microscope mode imaging systems, all at high spatial resolution. We have demonstrated the unique capabilities of a C60 ion microscope with a Timepix detector for high spatial resolution microscope mode secondary ion mass spectrometry imaging. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Hyperspectral fluorescence imaging using violet LEDs as excitation sources for fecal matter contaminate identification on spinach leaves

    USDA-ARS?s Scientific Manuscript database

    Food safety in the production of fresh produce for human consumption is a worldwide issue and needs to be addressed to decrease foodborne illnesses and resulting costs. Hyperspectral fluorescence imaging coupled with multivariate image analysis techniques for detection of fecal contaminates on spina...

  13. Image digitising and analysis of outflows from young stars

    NASA Astrophysics Data System (ADS)

    Zealey, W. J.; Mader, S. L.

    1997-08-01

    We present IIIaJ, IIIaF and IVN band images of Herbig-Haro objects digitised from the ESO/SERC Southern Sky Survey plates. These form part of a digital image database of southern HH objects, which allows the identification of emission and reflection nebulosity and the location of the obscured sources of outflows.

  14. Use of Interrupted Helium Flow in the Analysis of Vapor Samples with Flowing Atmospheric-Pressure Afterglow-Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Storey, Andrew P.; Zeiri, Offer M.; Ray, Steven J.; Hieftje, Gary M.

    2017-02-01

    The flowing atmospheric-pressure afterglow (FAPA) source was used for the mass-spectrometric analysis of vapor samples introduced between the source and mass spectrometer inlet. Through interrupted operation of the plasma-supporting helium flow, helium consumption is greatly reduced and dynamic gas behavior occurs that was characterized by schlieren imaging. Moreover, mass spectra acquired immediately after the onset of helium flow exhibit a signal spike before declining and ultimately reaching a steady level. This initial signal appears to be due to greater interaction of sample vapor with the afterglow of the source when helium flow resumes. In part, the initial spike in signal can be attributed to a pooling of analyte vapor in the absence of helium flow from the source. Time-resolved schlieren imaging of the helium flow during on and off cycles provided insight into gas-flow patterns between the FAPA source and the MS inlet that were correlated with mass-spectral data.

  15. Use of Interrupted Helium Flow in the Analysis of Vapor Samples with Flowing Atmospheric-Pressure Afterglow-Mass Spectrometry.

    PubMed

    Storey, Andrew P; Zeiri, Offer M; Ray, Steven J; Hieftje, Gary M

    2017-02-01

The flowing atmospheric-pressure afterglow (FAPA) source was used for the mass-spectrometric analysis of vapor samples introduced between the source and mass spectrometer inlet. Through interrupted operation of the plasma-supporting helium flow, helium consumption is greatly reduced and dynamic gas behavior occurs that was characterized by schlieren imaging. Moreover, mass spectra acquired immediately after the onset of helium flow exhibit a signal spike before declining and ultimately reaching a steady level. This initial signal appears to be due to greater interaction of sample vapor with the afterglow of the source when helium flow resumes. In part, the initial spike in signal can be attributed to a pooling of analyte vapor in the absence of helium flow from the source. Time-resolved schlieren imaging of the helium flow during on and off cycles provided insight into gas-flow patterns between the FAPA source and the MS inlet that were correlated with mass-spectral data.

  16. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  17. Combined MEG-EEG source localisation in patients with sub-acute sclerosing pan-encephalitis.

    PubMed

    Velmurugan, J; Sinha, Sanjib; Nagappa, Madhu; Mariyappa, N; Bindu, P S; Ravi, G S; Hazra, Nandita; Thennarasu, K; Ravi, V; Taly, A B; Satishchandra, P

    2016-08-01

To study the genesis and propagation patterns of periodic complexes (PCs) associated with myoclonic jerks in sub-acute sclerosing pan-encephalitis (SSPE) using magnetoencephalography (MEG) and electroencephalography (EEG). Simultaneous recording of MEG (306 channels) and EEG (64 channels) in five patients with SSPE (M:F = 3:2; age 10.8 ± 3.2 years; symptom duration 6.2 ± 10 months) was carried out using an Elekta Neuromag(®) TRIUX™ system. Qualitative analysis of 80-160 PCs per patient was performed. Ten isomorphic classical PCs with significant field topography per patient were analysed at the 'onset' and at the 'earliest significant peak' of the burst using discrete and distributed source imaging methods. The MEG background was asymmetrical in 2 and slow in 3 patients. Complexes were periodic (3) or quasi-periodic (2), occurring every 4-16 s, and varied in morphology among patients. Mean source localization at the onset of bursts using discrete and distributed source imaging was in the thalami and/or insula for magnetic source imaging (MSI) (50 and 50 %, respectively) and likewise in the thalami and/or insula for electric source imaging (ESI) (38 and 46 %, respectively). Mean source localization at the earliest rising phase of the peak was in the peri-central gyrus for MSI (49 and 42 %) and in the frontal cortex for ESI (52 and 56 %). Further analysis revealed that PCs were generated in the thalami and/or insula and thereafter propagated to the anterolateral surface of the cortices (viz. sensori-motor cortex and frontal cortex) on the same side as that of the onset. This novel MEG-EEG-based case series of PCs provides new insights into the plausible generators of myoclonus in SSPE and the patterns of their propagation.

  18. Application of time-resolved shadowgraph imaging and computer analysis to study micrometer-scale response of superfluid helium

    NASA Astrophysics Data System (ADS)

    Sajjadi, Seyed; Buelna, Xavier; Eloranta, Jussi

    2018-01-01

    Application of inexpensive light emitting diodes as backlight sources for time-resolved shadowgraph imaging is demonstrated. The two light sources tested are able to produce light pulse sequences in the nanosecond and microsecond time regimes. After determining their time response characteristics, the diodes were applied to study the gas bubble formation around laser-heated copper nanoparticles in superfluid helium at 1.7 K and to determine the local cavitation bubble dynamics around fast moving metal micro-particles in the liquid. A convolutional neural network algorithm for analyzing the shadowgraph images by a computer is presented and the method is validated against the results from manual image analysis. The second application employed the red-green-blue light emitting diode source that produces light pulse sequences of the individual colors such that three separate shadowgraph frames can be recorded onto the color pixels of a charge-coupled device camera. Such an image sequence can be used to determine the moving object geometry, local velocity, and acceleration/deceleration. These data can be used to calculate, for example, the instantaneous Reynolds number for the liquid flow around the particle. Although specifically demonstrated for superfluid helium, the technique can be used to study the dynamic response of any medium that exhibits spatial variations in the index of refraction.
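The velocity/acceleration/Reynolds-number computation from three color-separated frames can be sketched as follows (the positions, pulse separation, particle size, and fluid parameters are invented placeholders, not measured values):

```python
import math

# Object centroids recovered from the R, G and B shadowgraph frames
# (all values are invented placeholders for illustration).
p0, p1, p2 = (0.0, 0.0), (12e-6, 5e-6), (36e-6, 15e-6)   # positions, m
dt = 1e-6                                                 # pulse separation, s

def speed(a, b):
    """Mean speed over one inter-pulse interval."""
    return math.hypot(b[0] - a[0], b[1] - a[1]) / dt

v1, v2 = speed(p0, p1), speed(p1, p2)   # m/s
accel = (v2 - v1) / dt                  # m/s^2 (negative => deceleration)

# Instantaneous Reynolds number Re = rho*v*d/mu; the fluid parameters below
# are rough placeholders, not measured properties of He II at 1.7 K.
rho, mu, d = 145.0, 1.3e-6, 10e-6       # kg/m^3, Pa*s, particle diameter m
Re = rho * v2 * d / mu
print(round(v1), round(v2), round(Re))  # → 13 26 29000
```

Three frames give two velocity estimates and hence one acceleration estimate, which is exactly what the RGB pulse sequence on a single color CCD provides.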

  19. The Star Blended with the MOA-2008-BLG-310 Source Is Not the Exoplanet Host Star

    NASA Astrophysics Data System (ADS)

    Bhattacharya, A.; Bennett, D. P.; Anderson, J.; Bond, I. A.; Gould, A.; Batista, V.; Beaulieu, J. P.; Fouqué, P.; Marquette, J. B.; Pogge, R.

    2017-08-01

High-resolution Hubble Space Telescope (HST) image analysis of the MOA-2008-BLG-310 microlens system indicates that the excess flux at the location of the source found in the discovery paper cannot primarily be due to the lens star because it does not match the lens-source relative proper motion, μ_rel, predicted by the microlens models. This excess flux is most likely to be due to an unrelated star that happens to be located in close proximity to the source star. Two epochs of HST observations indicate proper motion for this blend star that is typical of a random bulge star but is not consistent with a companion to the source or lens stars if the flux is dominated by only one star, aside from the lens. We consider models in which the excess flux is due to a combination of an unrelated star and the lens star, and this yields a 95% confidence level upper limit on the lens star brightness of I_L > 22.44 and V_L > 23.62. A Bayesian analysis using a standard Galactic model and these magnitude limits yields a host star mass of M_h = 0.21^{+0.21}_{-0.09} M_⊙ and a planet mass of m_p = 23.4^{+23.9}_{-9.9} M_⊕ at a projected separation of a_⊥ = 1.12^{+0.16}_{-0.17} au. This result illustrates that excess flux in a high-resolution image of a microlens-source system need not be due to the lens. It is important to check that the lens-source relative proper motion is consistent with the microlensing prediction. The high-resolution image analysis techniques developed in this paper can be used to verify the WFIRST exoplanet microlensing survey mass measurements.

  20. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. II. Application to the HRI and First Results

    NASA Astrophysics Data System (ADS)

    Campana, Sergio; Lazzati, Davide; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    The wavelet detection algorithm (WDA) described in the accompanying paper by Lazzati et al. is suited to a fast and efficient analysis of images taken with the High-Resolution Imager (HRI) instrument on board the ROSAT satellite. Extensive testing was carried out on the detection pipeline: HRI fields with different exposure times were simulated and analyzed in the same fashion as the real data. Positions are recovered with errors of a few arcseconds, whereas fluxes are within a factor of 2 of their input values in more than 90% of the cases in the deepest images. Unlike the ``sliding-box'' detection algorithms, the WDA also provides a reliable description of the source extension, allowing for a complete search of, e.g., supernova remnants or clusters of galaxies in the HRI fields. A completeness analysis on simulated fields shows that for the deepest exposures considered (~120 ks) a limiting flux of ~3×10⁻¹⁵ ergs s⁻¹ cm⁻² can be reached over the entire field of view. We test the algorithm on real HRI fields selected for their crowding and/or the presence of extended or bright sources (e.g., clusters of galaxies and stars, supernova remnants). We show that our algorithm compares favorably with other X-ray detection algorithms, such as XIMAGE and EXSAS. Analysis with the WDA of the large set of HRI data will allow us to survey ~400 deg² down to a limiting flux of ~10⁻¹³ ergs s⁻¹ cm⁻², and ~0.3 deg² down to ~3×10⁻¹⁵ ergs s⁻¹ cm⁻². A complete catalog will result from our analysis, consisting of the Brera Multiscale Wavelet Bright Source Catalog (BMW-BSC), with sources detected at a significance of ≳4.5σ, and the Faint Source Catalog (BMW-FSC), with sources at ≳3.5σ. A conservative estimate based on the extragalactic log N-log S indicates that at least 16,000 sources will be revealed in the complete analysis of the entire HRI data set.

  1. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.

  2. Sources of Disconnection in Neurocognitive Aging: Cerebral White Matter Integrity, Resting-state Functional Connectivity, and White Matter Hyperintensity Volume

    PubMed Central

    Madden, David J.; Parks, Emily L.; Tallman, Catherine W.; Boylan, Maria A.; Hoagey, David A.; Cocjin, Sally B.; Packard, Lauren E.; Johnson, Micah A.; Chou, Ying-hui; Potter, Guy G.; Chen, Nan-kuei; Siciliano, Rachel E.; Monge, Zachary A.; Honig, Jesse A.; Diaz, Michele T.

    2017-01-01

    Age-related decline in fluid cognition can be characterized as a disconnection among specific brain structures, leading to a decline in functional efficiency. The potential sources of disconnection, however, are unclear. We investigated imaging measures of cerebral white matter integrity, resting-state functional connectivity, and white matter hyperintensity (WMH) volume as mediators of the relation between age and fluid cognition, in 145 healthy, community-dwelling adults 19–79 years of age. At a general level of analysis, with a single composite measure of fluid cognition and single measures of each of the three imaging modalities, age exhibited an independent influence on the cognitive and imaging measures, and the imaging variables did not mediate the age-cognition relation. At a more specific level of analysis, resting-state functional connectivity of sensorimotor networks was a significant mediator of the age-related decline in executive function. These findings suggest that different levels of analysis lead to different models of neurocognitive disconnection, and that resting-state functional connectivity, in particular, may contribute to age-related decline in executive function. PMID:28389085

  3. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms, and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross-platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  4. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  5. Development and validation of an open source quantification tool for DSC-MRI studies.

    PubMed

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms that allow external developers to implement their own quantification methods easily and without the need to pay for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that new methods can be added without breaking any of the existing functionality. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package, and the resulting perfusion parameters were compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold standard was obtained (R² > 0.8, and values are within the 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated against a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license.
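
    The agreement check reported above (Bland-Altman limits together with R²) can be sketched generically as follows; this is a textbook implementation of the bias and 95% limits of agreement, not code from the published plugin:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    paired series of measurements (e.g. plugin vs clinical package)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Values falling inside the returned interval are "within the 95% CI limits" in the Bland-Altman sense used above.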

  6. Detection and Imaging of the Crab Nebula with the Nuclear Compton Telescope

    NASA Astrophysics Data System (ADS)

    Bandstra, M. S.; Bellm, E. C.; Boggs, S. E.; Perez-Becker, D.; Zoglauer, A.; Chang, H.-K.; Chiu, J.-L.; Liang, J.-S.; Chang, Y.-H.; Liu, Z.-K.; Hung, W.-C.; Huang, M.-H. A.; Chiang, S. J.; Run, R.-S.; Lin, C.-H.; Amman, M.; Luke, P. N.; Jean, P.; von Ballmoos, P.; Wunderer, C. B.

    2011-09-01

    The Nuclear Compton Telescope (NCT) is a balloon-borne Compton telescope designed for the study of astrophysical sources in the soft gamma-ray regime (200 keV-20 MeV). NCT's 10 high-purity germanium crossed-strip detectors measure the deposited energies and three-dimensional positions of gamma-ray interactions in the sensitive volume, and this information is used to restrict the initial photon to a circle on the sky using the Compton scatter technique. Thus NCT is able to perform spectroscopy, imaging, and polarization analysis on soft gamma-ray sources. NCT is one of the next generation of Compton telescopes—the so-called compact Compton telescopes (CCTs)—which can achieve effective areas comparable to the Imaging Compton Telescope's with an instrument that is a fraction of the size. The Crab Nebula was the primary target for the second flight of the NCT instrument, which occurred on 2009 May 17 and 18 in Fort Sumner, New Mexico. Analysis of 29.3 ks of data from the flight reveals an image of the Crab at a significance of 4σ. This is the first reported detection of an astrophysical source by a CCT.

  7. 3-D interactive visualisation tools for HI spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  8. A review of multivariate methods in brain imaging data fusion

    NASA Astrophysics Data System (ADS)

    Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.

    2010-03-01

    In the joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and have been applied for different purposes based on their respective assumptions. In this paper, we provide a comprehensive review of the optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA for blind source separation (sCCA), and partial least squares (PLS); and 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared with joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.

  9. A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software

    NASA Technical Reports Server (NTRS)

    Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.

    2016-01-01

    This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the areas of bone, muscle, and biofluidic studies. As a result, a series of Python scripts was developed to bridge the shortcomings of the available open-source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed for the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source software packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and specifying boundary conditions for finite element analysis. To fully enable FEM modeling, an in-house Python script assigned material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensities of the parent medical image, correlated to published relations between image intensity and material properties such as ash density. A graphical user interface combined the Python scripts and other software into a user-friendly interface. This Python-script approach provides a potential alternative to expensive commercial software and to limited open-source freeware for the creation of 3D computational models.
More work will be needed to validate this approach in creating finite-element models.
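
    The element-wise material assignment described above (averaging the voxel intensities covered by each element, then mapping the mean through a published intensity-to-property curve) can be sketched as below. The calibration points in the example are placeholders, not values from the project:

```python
import numpy as np

def element_property(voxel_intensities, intensity_pts, property_pts):
    """Assign a material property (e.g. ash density) to one finite
    element: average the intensities of the voxels the element covers,
    then linearly interpolate on a calibration curve of published
    (intensity, property) points."""
    mean_intensity = float(np.mean(voxel_intensities))
    return float(np.interp(mean_intensity, intensity_pts, property_pts))
```

    Running this over every element of the mesh yields the per-element property field that FEBio can consume.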

  10. Cardiac MOLLI T1 mapping at 3.0 T: comparison of patient-adaptive dual-source RF and conventional RF transmission.

    PubMed

    Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M

    2017-06-01

    To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 T acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analysis of T1 times was performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced the T1 standard deviation in 14 out of 16 myocardial segments (87.5%). A significant reduction of T1 variance was obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No corresponding artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 times between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.

  11. DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.

    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value, which enables us to draw strong conclusions even when only limited computational resources can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.

  12. Mitigating fringing in discrete frequency infrared imaging using time-delayed integration

    PubMed Central

    Ran, Shihao; Berisha, Sebastian; Mankar, Rupali; Shih, Wei-Chuan; Mayerich, David

    2018-01-01

    Infrared (IR) spectroscopic microscopes provide the potential for label-free quantitative molecular imaging of biological samples, which can be used to aid in histology, forensics, and pharmaceutical analysis. Most IR imaging systems use broadband illumination combined with a spectrometer to separate the signal into spectral components. This technique is currently too slow for many biomedical applications such as clinical diagnosis, primarily due to the limited availability of bright mid-infrared sources and sensitive MCT detectors. There has been a recent push to increase throughput using coherent light sources, such as synchrotron radiation and quantum cascade lasers. While these sources provide a significant increase in intensity, the coherence introduces fringing artifacts in the final image. We demonstrate that applying time-delayed integration in one dimension can dramatically reduce fringing artifacts with minimal alterations to the standard infrared imaging pipeline. The proposed technique also offers the potential for less expensive focal plane array detectors, since linear arrays can be more readily incorporated into the proposed framework. PMID:29552416

  13. Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping

    2013-05-01

    Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.

  14. Survey Plan For Characterization of the Subsurface Underlying the National Aeronautics and Space Administration's Marshall Space Flight Center in Huntsville, Alabama. Volume 1 and 2

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Topics considered include: survey objectives; technologies for non-invasive imaging of the subsurface; cost; data requirements and sources; climatic conditions; hydrology and geology; chemicals; magnetometry; electrical methods (resistivity, potential); optical-style imaging; reflection/refraction seismics; gravitometry; photo-acoustic activation; well drilling and borehole analysis; comparative assessment matrix; ground sensors; choice of neutron sources; logistics of operations; system requirements; health and safety plans.

  15. Medical Imaging System

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The MD Image System, a true-color image processing system that serves as a diagnostic aid and tool for storage and distribution of images, was developed by Medical Image Management Systems, Huntsville, AL, as a "spinoff from a spinoff." The original spinoff, Geostar 8800, developed by Crystal Image Technologies, Huntsville, incorporates advanced UNIX versions of ELAS (developed by NASA's Earth Resources Laboratory for analysis of Landsat images) for general purpose image processing. The MD Image System is an application of this technology to a medical system that aids in the diagnosis of cancer, and can accept, store and analyze images from other sources such as Magnetic Resonance Imaging.

  16. Quantitative 3D Analysis of Nuclear Morphology and Heterochromatin Organization from Whole-Mount Plant Tissue Using NucleusJ.

    PubMed

    Desset, Sophie; Poulet, Axel; Tatout, Christophe

    2018-01-01

    Image analysis is a classical way to study nuclear organization. While nuclear organization used to be investigated by colorimetric or fluorescent labeling of DNA or specific nuclear compartments, new methods in microscopy imaging now enable qualitative and quantitative analyses of chromatin patterns, nuclear size, and nuclear shape. Several procedures have been developed to prepare samples in order to collect 3D images for the analysis of spatial chromatin organization, but only a few preserve the positional information of the cell within its tissue context. Here, we describe a whole-mount tissue preparation procedure coupled to DNA staining with the PicoGreen® intercalating agent, suitable for image analysis of the nucleus in living and fixed tissues. 3D image analysis is then performed using NucleusJ, an open-source ImageJ plugin, which quantifies variations in nuclear morphology such as nuclear volume, sphericity, elongation, and flatness, as well as heterochromatin content and position with respect to the nuclear periphery.
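
    Of the shape descriptors listed above, sphericity has a simple closed form: π^(1/3)·(6V)^(2/3)/A, which equals 1 for a perfect sphere and decreases as the nucleus flattens or elongates. A sketch of that single measure, assuming the plugin's volume and surface-area outputs as inputs (this is not NucleusJ's actual code):

```python
import math

def sphericity(volume, surface_area):
    """Sphericity of a segmented nucleus: the surface area of a sphere
    with the same volume, divided by the measured surface area
    (1.0 for a perfect sphere, < 1 otherwise)."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area
```

    For a sphere of radius r, plugging V = 4πr³/3 and A = 4πr² into the formula returns exactly 1, which is a convenient sanity check for any segmentation pipeline.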

  17. Interferometric superlocalization of two incoherent optical point sources.

    PubMed

    Nair, Ranjith; Tsang, Mankei

    2016-02-22

    A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.

  18. A new dust source map of Central Asia derived from MODIS Terra/Aqua data using dust enhancement techniques

    NASA Astrophysics Data System (ADS)

    Nobakht, Mohamad; Shahgedanova, Maria; White, Kevin

    2017-04-01

    Central Asian deserts are a significant source of dust in the middle latitudes, where economic activity and health of millions of people are affected by dust storms. Detailed knowledge of sources of dust, controls over their activity, seasonality and atmospheric pathways are of crucial importance but to date, these data are limited. This paper presents a detailed database of sources of dust emissions in Central Asia, from western China to the Caspian Sea, obtained from the analysis of the Moderate Resolution Imaging Spectroradiometer (MODIS) data between 2003 and 2012. A dust enhancement algorithm was employed to obtain two composite images per day at 1 km resolution from MODIS Terra/Aqua acquisitions, from which dust point sources (DPS) were detected by visual analysis and recorded in a database together with meteorological variables at each DPS location. Spatial analysis of DPS has revealed several active source regions, including some which were not widely discussed in literature before (e.g. Northern Afghanistan sources, Betpak-Dala region in western Kazakhstan). Investigation of land surface characteristics and meteorological conditions at each source region revealed mechanisms for the formation of dust sources, including post-fire wind erosion (e.g. Lake Balkhash basin) and rapid desertification (e.g. the Aral Sea). Different seasonal patterns of dust emissions were observed as well as inter-annual trends. The most notable feature was an increase in dust activity in the Aral Kum.

  19. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
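
    The paper's statistical framework is more elaborate than a simple vote, but the core idea (combining many minimally trained gamers' calls into one label per cell, then scoring against the expert gold standard) can be sketched like this; the data layout and function names are hypothetical:

```python
from collections import Counter

def combine_diagnoses(votes):
    """Majority-vote label per RBC image from repeated gamer diagnoses.
    votes maps a cell id to a list of 'positive'/'negative' calls."""
    return {cell: Counter(calls).most_common(1)[0][0]
            for cell, calls in votes.items()}

def accuracy_vs_gold(predicted, gold):
    """Fraction of gold-standard cells (excluding 'questionable' ones)
    where the crowd label matches the expert label."""
    scored = [c for c, label in gold.items() if label != "questionable"]
    return sum(predicted.get(c) == gold[c] for c in scored) / len(scored)
```

    The reported "within 98% of the gold standard" figure corresponds to this kind of accuracy score, computed with the paper's statistically weighted combination rather than a plain majority vote.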

  20. Coherent diffractive imaging of single helium nanodroplets with a high harmonic generation source.

    PubMed

    Rupp, Daniela; Monserud, Nils; Langbehn, Bruno; Sauppe, Mario; Zimmermann, Julian; Ovcharenko, Yevheniy; Möller, Thomas; Frassetto, Fabio; Poletto, Luca; Trabattoni, Andrea; Calegari, Francesca; Nisoli, Mauro; Sander, Katharina; Peltz, Christian; J Vrakking, Marc; Fennel, Thomas; Rouzée, Arnaud

    2017-09-08

    Coherent diffractive imaging of individual free nanoparticles has opened routes for the in situ analysis of their transient structural, optical, and electronic properties. So far, single-shot single-particle diffraction was assumed to be feasible only at extreme ultraviolet and X-ray free-electron lasers, restricting this research field to large-scale facilities. Here we demonstrate single-shot imaging of isolated helium nanodroplets using extreme ultraviolet pulses from a femtosecond-laser-driven high harmonic source. We obtain bright wide-angle scattering patterns that allow us to uniquely identify hitherto unresolved prolate shapes of superfluid helium droplets. Our results mark the advent of single-shot gas-phase nanoscopy with lab-based short-wavelength pulses and pave the way to ultrafast coherent diffractive imaging with phase-controlled multicolor fields and attosecond pulses.

  1. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, making it an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily degraded by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into changed or unchanged classes. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
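The pipeline in this abstract (log-ratio difference image, then PCA plus k-means over per-pixel features) can be sketched end-to-end in NumPy. The saliency-guided candidate selection is omitted here, and the patch size, number of principal components, and cluster-relabeling heuristic are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def change_map(img1, img2, patch=5, n_components=3):
    """Sketch of the pipeline minus the saliency step: log-ratio DI,
    per-pixel patch features, PCA, two-class k-means."""
    eps = 1.0                                          # guard against log(0)
    di = np.abs(np.log((img1 + eps) / (img2 + eps)))   # log-ratio difference image

    # One feature vector per pixel: the flattened DI patch around it.
    pad = patch // 2
    padded = np.pad(di, pad, mode="reflect")
    h, w = di.shape
    feats = np.array([padded[i:i + patch, j:j + patch].ravel()
                      for i in range(h) for j in range(w)])

    # PCA via SVD, keeping the leading components.
    feats -= feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    proj = feats @ vt[:n_components].T

    # Two-class k-means, initialised at the extremes of the first component.
    centers = np.stack([proj[proj[:, 0].argmin()], proj[proj[:, 0].argmax()]])
    for _ in range(20):
        labels = np.linalg.norm(proj[:, None] - centers[None], axis=2).argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = proj[labels == c].mean(axis=0)
    labels = labels.reshape(h, w)

    # Call the cluster with the larger mean DI value "changed" (label 1).
    if (labels == 0).any() and (labels == 1).any() \
            and di[labels == 0].mean() > di[labels == 1].mean():
        labels = 1 - labels
    return labels   # 1 = changed, 0 = unchanged
```

On a synthetic pair in which one block of pixels is brightened, the map flags that block as changed; real SAR data would of course need the omitted saliency stage to suppress speckle.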

  2. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    PubMed

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

Imaging flow cytometry (IFC) enables the high-throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high-content, information-rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Raw image files (.rif) from an imaging flow cytometer are compensated and corrected (yielding the proprietary .cif file format) and imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments, allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches within "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye, including subtle measured differences in label-free detection channels such as bright-field and dark-field imagery.
Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Objected-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithm improvements alone. To this end, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) to obtain elevation information; finally, the image feature knowledge, spectral indices and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs especially well for building classification. 
The method not only exploits the advantages of multi-source spatial data, such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.

  4. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach that combines a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; these were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.

  5. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  6. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.

    PubMed

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-02-11

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  7. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy

    PubMed Central

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-01-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794

  8. Source analysis of alpha rhythm reactivity using LORETA imaging with 64-channel EEG and individual MRI.

    PubMed

    Cuspineda, E R; Machado, C; Virues, T; Martínez-Montes, E; Ojeda, A; Valdés, P A; Bosch, J; Valdes, L

    2009-07-01

    Conventional EEG and quantitative EEG visual stimuli (close-open eyes) reactivity analysis have shown their usefulness in clinical practice; however studies at the level of EEG generators are limited. The focus of the study was visual reactivity of cortical resources in healthy subjects and in a stroke patient. The 64 channel EEG and T1 magnetic resonance imaging (MRI) studies were obtained from 32 healthy subjects and a middle cerebral artery stroke patient. Low Resolution Electromagnetic Tomography (LORETA) was used to estimate EEG sources for both close eyes (CE) vs. open eyes (OE) conditions using individual MRI. The t-test was performed between source spectra of the two conditions. Thresholds for statistically significant t values were estimated by the local false discovery rate (lfdr) method. The Z transform was used to quantify the differences in cortical reactivity between the patient and healthy subjects. Closed-open eyes alpha reactivity sources were found mainly in posterior regions (occipito-parietal zones), extended in some cases to anterior and thalamic regions. Significant cortical reactivity sources were found in frequencies different from alpha (lower t-values). Significant changes at EEG reactivity sources were evident in the damaged brain hemisphere. Reactivity changes were also found in the "healthy" hemisphere when compared with the normal population. In conclusion, our study of brain sources of EEG alpha reactivity provides information that is not evident in the usual topographic analysis.

  9. Classifying bent radio galaxies from a mixture of point-like/extended images with Machine Learning.

    NASA Astrophysics Data System (ADS)

    Bastien, David; Oozeer, Nadeem; Somanah, Radhakrishna

    2017-05-01

The hypothesis that bent radio sources are preferentially found in rich, massive galaxy clusters, together with the availability of huge amounts of data from radio surveys, has fueled our motivation to use Machine Learning (ML) to identify bent radio sources and thereby use them as tracers for galaxy clusters. Shapelet analysis allowed us to decompose radio images into 256 features that could be fed into the ML algorithm. Additionally, ideas from the field of neuropsychology helped us to consider training the machine to identify bent galaxies at different orientations. From our analysis, we found that the Random Forest algorithm was the most effective, with an accuracy of 92% for the classification of point and extended sources and 80% for bent versus unbent classification.

  10. High contrast imaging through adaptive transmittance control in the focal plane

    NASA Astrophysics Data System (ADS)

    Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake

    2016-05-01

High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications, ranging from the daily chore of driving into a sun-drenched scene to the in vivo use of biomedical imaging in various types of keyhole surgery. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems, where the light sources illuminating the object are unavoidably strong and typically mask the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal plane transmittance, to selectively remove object-domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly orientated small specular surfaces. The latter is more representative of real-world practical imaging systems. Thus, the background signal comprises groups of oblique rays corresponding to the distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. Thus, the transmittance of a spatial light modulator positioned in the focal plane can be adaptively controlled to block a particular source of background light. Consequently, the image plane intensity is entirely due to the object features. Experimental image data are presented to verify the efficacy of the methodology.

  11. OPTICAL IMAGES AND SOURCE CATALOG OF AKARI NORTH ECLIPTIC POLE WIDE SURVEY FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Yiseul; Im, Myungshin; Lee, Induk

    2010-09-15

We present the source catalog and the properties of the B-, R-, and I-band images obtained to support the AKARI North Ecliptic Pole Wide (NEP-Wide) survey. The NEP-Wide is an AKARI infrared imaging survey of the north ecliptic pole covering a 5.8 deg^2 area over 2.5-6 μm wavelengths. The optical imaging data were obtained at the Maidanak Observatory in Uzbekistan using the Seoul National University 4k x 4k Camera on the 1.5 m telescope. These images cover 4.9 deg^2 where no deep optical imaging data are available. Our B-, R-, and I-band data reach depths of ~23.4, ~23.1, and ~22.3 mag (AB) at 5σ, respectively. The source catalog contains 96,460 objects in the R band, and the astrometric accuracy is about 0.15'' at 1σ in each R.A. and decl. direction. These photometric data will be useful for many studies, including identification of optical counterparts of the infrared sources detected by AKARI, analysis of their spectral energy distributions from optical through infrared, and the selection of interesting objects to understand obscured galaxy evolution.

  12. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

Nowadays, solving the tasks of revising existing map products and creating new maps requires choosing a land cover image source. The trade-off between the effectiveness and cost of aerial mapping systems and those of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is to make a comparative analysis between the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source: airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene, and an area approximately equal to the territory of Bulgaria.

  13. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.

  14. OpenComet: An automated tool for comet assay image analysis

    PubMed Central

    Gyori, Benjamin M.; Venkatachalam, Gireedhar; Thiagarajan, P.S.; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time. PMID:24624335

  15. OpenComet: an automated tool for comet assay image analysis.

    PubMed

    Gyori, Benjamin M; Venkatachalam, Gireedhar; Thiagarajan, P S; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  16. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule in which the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. The full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
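The classification rule described here, Fisher scores for feature selection reused as weights in a nearest-neighbor distance, is simple enough to sketch directly. The image feature extraction itself is the paper's heavy lifting and is not reproduced, so any numeric feature matrix stands in for it; the formulas below are the standard Fisher score and a weighted 1-NN, not code from the published implementation:

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: spread of the class means over the
    pooled within-class variance (higher = more informative)."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)   # small constant avoids division by zero

def wnn_classify(X_train, y_train, x, weights):
    """1-nearest-neighbour under a Fisher-score-weighted squared distance."""
    d2 = ((X_train - x) ** 2 * weights).sum(axis=1)
    return y_train[d2.argmin()]
```

Because the weights multiply squared feature differences, a noisy feature with a near-zero Fisher score contributes almost nothing to the distance, which is exactly the effect the abstract describes.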

  17. A Cognitive Approach to Teaching a Graduate-Level GEOBIA Course

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel A.

    2016-06-01

Remote sensing image analysis training occurs both in the classroom and in the research lab. Classroom education for traditional pixel-based image analysis has been standardized across college curriculums. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to computer-based analysis, GEOBIA courses should examine the cognitive factors underlying visual interpretation. This paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of the course is presented, along with plans to develop an open-source repository for course materials.

  18. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by GMCA first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To assess the algorithms quantitatively, the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) indices are calculated, measuring the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that GMCA is highly effective for remote sensing image denoising; it is hard to distinguish the original noiseless image from the image recovered by GMCA by visual inspection.
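The two quality indices used for the quantitative assessment can be computed as follows. PSNR is standard; for SSIM, a faithful implementation averages the index over local sliding windows, so the single-window version below is a deliberate simplification for illustration:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (gray-level fidelity)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=255.0):
    """Single-window structural similarity (structure-level fidelity).

    The standard SSIM index averages this quantity over local windows;
    one global window keeps the sketch short."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An identical pair of images yields infinite PSNR and SSIM of 1; heavier noise drives PSNR down, which is how the abstract's denoising comparisons are scored.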

  19. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on a regional and continental scale. The identification of major aerosol constituents for basic source apportionment and air quality issues requires considerable analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time-scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time and the related sample amount necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single-particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a technique complementary to bulk analytical methods, chemical imaging offers a new way to study air pollution events by obtaining major aerosol constituents, with single-particle features, at high temporal resolution and from small sample volumes. The analysis of chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy-dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry) with subsequent combined multivariate analytics. 
Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna. Exceptional episodic events such as the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event of April 2016). Further, chemical imaging of the biological aerosol constituents of an autumnal pollen outbreak in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, illustrates the need for multisensor imaging, and especially for the combined approach. The obtained chemical images will be correlated with bulk analytical results, and the benefits of the overall methodological approach, combining bulk analytics with combined chemical imaging of exceptional episodic air pollution events, will be discussed.

  20. Identification and Classification of Infrared Excess Sources in the Spitzer Enhanced Imaging Products (SEIP) Catalog

    NASA Astrophysics Data System (ADS)

    Strasburger, David; Gorjian, Varoujan; Burke, Todd; Childs, Linda; Odden, Caroline; Tambara, Kevin; Abate, Antoinette; Akhtar, Nadir; Beach, Skyler; Bhojwani, Ishaan; Brown, Caden; Dear, AnnaMaria; Dumont, Theodore; Harden, Olivia; Joli-Coeur, Laurent; Nahirny, Rachel; Nakahira, Andie; Nix, Sabine; Orgul, Sarp; Parry, Johnny; Picken, John; Taylor, Isabel; Toner, Emre; Turner, Aspen; Xu, Jessica; Zhu, Emily

    2015-01-01

    The Spitzer Space Telescope's original cryogenic mission imaged roughly 42 million sources, most of which were incidental and never specifically targeted for research. These have now been compiled in the publicly accessible Spitzer Enhanced Imaging Products (SEIP) catalog. The SEIP stores millions of never-before-examined sources that happened to be in the same field of view as objects specifically selected for study. This project examined the catalog to isolate previously unknown infrared excess (IRXS) candidates. The culling process utilized four steps. First, we considered only those objects with signal-to-noise ratios of at least 10 to 1 in the following five wavelengths: 3.6, 4.5, 5.8, 8 and 24 microns, which narrowed the source list to about one million. Second, objects in highly studied regions, such as the galactic plane and areas covered by previous infrared surveys, were removed. This further reduced the population of sources to 283,758. Third, the remaining sources were plotted using a [3.6]-[4.5] vs. [8]-[24] color-color diagram to isolate IRXS candidates. Fourth, multiple images of sixty-three outlier points from the extrema of the color-color diagram were examined to verify that the sources had been cross-matched correctly and to exclude any candidate sources that may have been compromised by image artifacts or field crowding. The team will ultimately provide statistics for the prevalence of IRXS sources in the SEIP catalog and provide analysis of those extreme outliers from the main locus of points. This research was made possible through the NASA/IPAC Teacher Archive Research Program (NITARP) and was funded by the NASA Astrophysics Data Program.
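
The SNR- and color-based culling described above can be sketched in a few lines. The fragment below is illustrative only: the color cut, band order, and toy magnitudes are made up for demonstration and are not the survey's actual thresholds or data.

```python
import numpy as np

def irxs_candidates(m36, m45, m8, m24, snr, snr_min=10.0, cut=1.0):
    """Select infrared-excess candidates: require SNR >= snr_min in all
    bands, then flag sources that are red in [3.6]-[4.5] or [8]-[24].
    The color cut value is illustrative, not the one used by the survey."""
    good = (snr >= snr_min).all(axis=1)
    color1 = m36 - m45
    color2 = m8 - m24
    excess = good & ((color1 > cut) | (color2 > cut))
    return np.nonzero(excess)[0]

# Three toy sources: 0 has an excess and good SNR, 1 has no excess,
# 2 has an excess but fails the SNR cut in one band.
m36 = np.array([10.0, 10.0, 10.0])
m45 = np.array([8.0, 9.8, 8.0])
m8  = np.array([9.0, 9.0, 9.0])
m24 = np.array([9.0, 9.0, 9.0])
snr = np.array([[12, 15, 11, 20, 13],
                [30, 25, 22, 18, 16],
                [ 5, 40, 33, 21, 12]])
cands = irxs_candidates(m36, m45, m8, m24, snr)
```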

  1. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.
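
The core DIC computation can be illustrated with a minimal integer-pixel sketch: a subset of the reference image is matched in a deformed image by maximizing zero-normalized cross-correlation. This is a generic textbook version, not DICe's actual implementation (which adds subpixel interpolation, subset shape functions, and much more).

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return (a * b).sum() / denom if denom else 0.0

def track_subset(ref, cur, y, x, half=8, search=5):
    """Find the integer-pixel displacement of the subset centered at (y, x)
    in `ref` by maximizing NCC over a +/- `search` pixel window in `cur`."""
    tmpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            patch = cur[y + dv - half:y + dv + half + 1,
                        x + du - half:x + du + half + 1]
            score = ncc(tmpl, patch)
            if score > best:
                best, best_uv = score, (dv, du)
    return best_uv  # (displacement in y, displacement in x)

# Synthetic speckle pattern rigidly shifted by (2, -3) pixels.
rng = np.random.default_rng(0)
speckle = rng.random((64, 64))
shifted = np.roll(np.roll(speckle, 2, axis=0), -3, axis=1)
disp = track_subset(speckle, shifted, 32, 32)
```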

  2. Episodes of floods in Mangala Valles, Mars, from the analysis of HRSC, MOC and THEMIS images

    USGS Publications Warehouse

    Basilevsky, A.T.; Neukum, G.; Werner, S.C.; Dumke, A.; Van Gasselt, S.; Kneissl, T.; Zuschneid, W.; Rommel, D.; Wendt, L.; Chapman, M.; Head, J.W.; Greeley, R.

    2009-01-01

    The Mangala Valles is a 900-km-long outflow channel system in the highlands adjacent to the south-eastern flank of the Tharsis bulge. This work was intended to answer two questions left unresolved in previous studies: (1) Was there only one source of water (Mangala Fossa at the valley head, which is one of the Medusae Fossae troughs or graben), or were other sources also involved in the valley-carving water supply? (2) Was there only one episode of flooding (perhaps with phases), or were there several episodes significantly separated in time? The geologic analysis of HRSC image 0286 and mapping supported by analysis of MOC and THEMIS images show that Mangala Valles was carved by water released from several sources. The major source was Mangala Fossa, which probably formed in response to magmatic dike intrusion. The graben cracked the cryosphere and permitted the release of groundwater held under hydrostatic pressure. This major source was augmented by a few smaller-scale sources at localities in (1) two mapped heads of magmatic dikes, (2) heads of two clusters of sinuous channels, and (3) probably several large-knob terrain localities. The analysis of crater counts at more than 60 localities showed that the first episode of formation of Mangala Valles occurred ~3.5 Ga ago and was followed by three more episodes: one ~1 Ga ago, another ~0.5 Ga ago, and the last ~0.2 Ga ago. East of the mapped area there are extended and thick lava flows whose source may be the eastern continuation of the Mangala source graben. Crater counts in 10 localities on these lava flows correlate with those taken on the Mangala valley elements, supporting the idea that the valley head graben was caused by dike intrusions. Our observations suggest that the waning stage of the latest flooding episode (~0.2 Ga ago) led to the formation at the valley head of meander-like features sharing some characteristics with meanders of terrestrial rivers. 
If this analogy is correct, it could suggest a short episode of global warming in Late Amazonian time. © 2008 Elsevier Ltd. All rights reserved.

  3. Radiometric analysis of photographic data by the effective exposure method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, B J

    1972-04-01

    The effective exposure method provides for radiometric analysis of photographic data. A three-dimensional model, where density is a function of energy and wavelength, is postulated to represent the film response function. Calibration exposures serve to eliminate the other factors which affect image density. The effective exposure causing an image can be determined by comparing the image density with that of a calibration exposure. If the relative spectral distribution of the source is known, irradiance and/or radiance can be unfolded from the effective exposure expression.

  4. Quantitative Immunofluorescence Analysis of Nucleolus-Associated Chromatin.

    PubMed

    Dillinger, Stefan; Németh, Attila

    2016-01-01

    The nuclear distribution of eu- and heterochromatin is nonrandom, heterogeneous, and dynamic, which is mirrored by specific spatiotemporal arrangements of histone posttranslational modifications (PTMs). Here we describe a semiautomated method for the analysis of histone PTM localization patterns within the mammalian nucleus, using confocal laser scanning microscope images of fixed, immunofluorescence-stained cells as the data source. The ImageJ-based process includes segmentation of the nucleus, followed by measurements of total fluorescence intensities, the heterogeneity of the staining, and the frequency of the brightest pixels in the region of interest (ROI). In the presented image analysis pipeline, the perinucleolar chromatin is selected as the primary ROI, and the nuclear periphery as the secondary ROI.
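
A minimal, hypothetical version of such ROI measurements (total intensity, heterogeneity as the coefficient of variation, and the share of the image's brightest pixels that fall inside the ROI) might look as follows; the synthetic "nucleus", the percentile threshold, and the function name are illustrative, not those of the published pipeline.

```python
import numpy as np

def roi_statistics(img, mask, top_percent=3.0):
    """Summarize a stained channel inside a region of interest (ROI):
    total intensity, heterogeneity (coefficient of variation), and the
    fraction of the image's brightest pixels that fall inside the ROI."""
    vals = img[mask]
    total = float(vals.sum())
    cv = float(vals.std() / vals.mean())
    cutoff = np.percentile(img, 100.0 - top_percent)
    bright = img >= cutoff
    bright_in_roi = float((bright & mask).sum()) / float(bright.sum())
    return total, cv, bright_in_roi

# Toy image: a "nucleus" disc with a brighter perinucleolar rim.
yy, xx = np.mgrid[:100, :100]
r = np.hypot(yy - 50, xx - 50)
nucleus = r < 40                    # segmented nucleus mask
img = np.where(nucleus, 100.0, 0.0)
img[(r > 10) & (r < 15)] = 255.0    # bright ring around the "nucleolus"
perinucleolar = (r > 8) & (r < 17)  # primary ROI
total, cv, frac = roi_statistics(img, perinucleolar)
```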

  5. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is collapsed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
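
The regularized deconvolution step can be sketched generically with a truncated-SVD formulation on noise-free synthetic curves; the exponential AIF and residue function below are invented for illustration and this is not the authors' cascaded-systems model.

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, rel_threshold=0.01):
    """Estimate the flow-scaled residue function k(t) from a tissue
    time-attenuation curve by truncated-SVD deconvolution of the
    convolution matrix built from the arterial input function (AIF).
    Singular values below rel_threshold * s_max are discarded; this is
    the regularization step that trades noise suppression for bias."""
    n = len(aif)
    # Lower-triangular Toeplitz matrix A so that tissue = A @ k
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Noise-free sanity check with simple exponential curves.
dt = 1.0
t = np.arange(40) * dt
aif = np.exp(-t / 3.0)
k_true = 0.6 * np.exp(-t / 4.0)          # CBF-scaled residue function
tissue = dt * np.convolve(aif, k_true)[:len(t)]
k_est = tsvd_deconvolve(aif, tissue, dt)
cbf_est = k_est.max()                    # CBF is proportional to max k(t)
```

On noisy clinical data the threshold would be set much higher, which is exactly the regularization-strength dependence the paper's framework quantifies.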

  6. Non-invasive, Contrast-enhanced Spectral Imaging of Breast Cancer Signatures in Preclinical Animal Models In vivo

    PubMed Central

    Ramanujan, V Krishnan; Ren, Songyang; Park, Sangyong; Farkas, Daniel L

    2011-01-01

    We report here a non-invasive multispectral imaging platform for monitoring spectral reflectance and fluorescence images from primary breast carcinoma and metastatic lymph nodes in a preclinical rat model in vivo. The system is built around a monochromator light source and an acousto-optic tunable filter (AOTF) for spectral selection. Quantitative analysis of the measured reflectance profiles in the presence of a widely used lymphazurin dye clearly demonstrates the capability of the proposed imaging platform to detect tumor-associated spectral signatures in the primary tumors as well as metastatic lymphatics. Tumor-associated changes in vascular oxygenation and interstitial fluid pressure are reasoned to be the physiological sources of the measured reflectance profiles. We also discuss the translational potential of our imaging platform in an intra-operative clinical setting. PMID:21572915

  7. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image to enable identification of differences between the source image and the target image.

  8. Zoomed MRI Guided by Combined EEG/MEG Source Analysis: A Multimodal Approach for Optimizing Presurgical Epilepsy Work-up and its Application in a Multi-focal Epilepsy Patient Case Study.

    PubMed

    Aydin, Ü; Rampp, S; Wollbrink, A; Kugel, H; Cho, J -H; Knösche, T R; Grova, C; Wellmer, J; Wolters, C H

    2017-07-01

    In recent years, the use of source analysis based on electroencephalography (EEG) and magnetoencephalography (MEG) has gained considerable attention in presurgical epilepsy diagnosis. However, in many cases the source analysis alone is not used to tailor surgery unless the findings are confirmed by lesions, such as cortical malformations in MRI. For many patients with MRI-negative epilepsy, the histology of resected tissue shows small lesions, which indicates the need for more sensitive MR sequences. In this paper, we describe a technique to maximize the synergy between combined EEG/MEG (EMEG) source analysis and high resolution MRI. The procedure has three main steps: (1) construction of a detailed and calibrated finite element head model that considers the variation of individual skull conductivities and white matter anisotropy, (2) EMEG source analysis performed on averaged interictal epileptic discharges (IED), (3) high resolution (0.5 mm) zoomed MR imaging, limited to small areas centered at the EMEG source locations. The proposed new diagnosis procedure was then applied in a particularly challenging case of an epilepsy patient: EMEG analysis at the peak of the IED coincided with a right frontal focal cortical dysplasia (FCD), which had been detected at standard 1 mm resolution MRI. Of greater interest, zoomed MR imaging (applying parallel transmission, 'ZOOMit') guided by EMEG at the spike onset revealed a second, fairly subtle, FCD in the left fronto-central region. The evaluation revealed that this second FCD, which had not been detectable at standard 1 mm resolution, was the trigger of the seizures.

  9. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. 
The virtual slide(s) for this article can be found here:http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
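
The mosaic idea (dividing a huge image into tiles, with or without overlap, so it can be processed within limited RAM) can be sketched generically; the tile and overlap sizes below are arbitrary, and the helper function is illustrative rather than part of NDPITools.

```python
import numpy as np

def tile_grid(width, height, tile, overlap=0):
    """Tile boxes (x, y, w, h) covering a width x height image with
    `tile`-sized, `overlap`-pixel-overlapping tiles, so an image far
    larger than RAM can be streamed and processed one tile at a time."""
    step = tile - overlap
    boxes = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            boxes.append((x, y, min(tile, width - x), min(tile, height - y)))
            if x + tile >= width:
                break
        if y + tile >= height:
            break
    return boxes

# Check that the grid covers every pixel of a 1000 x 700 image.
boxes = tile_grid(1000, 700, 256, overlap=32)
cover = np.zeros((700, 1000), dtype=bool)
for x, y, w, h in boxes:
    cover[y:y + h, x:x + w] = True
fully_covered = bool(cover.all())
```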

  10. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. 
Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479

  11. A novel method for automated tracking and quantification of adult zebrafish behaviour during anxiety.

    PubMed

    Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh

    2016-09-15

    Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open source software for behavioural analysis. Here we propose a novel method we call ZebraTrack. It comprises a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for extraction of behavioural endpoints. Our ImageJ algorithm is capable of giving users control at key steps while maintaining automation in tracking, without the need to install external plugins. We have validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state anxiety, zebrafish showed reduced distance travelled, increased thigmotaxis and more freezing events. Furthermore, we propose a method to represent both the spatial and temporal distribution of choice-based behaviour, which is currently not possible with simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as Ethovision XT. In summary, we have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.
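
Threshold-based centroid tracking of the kind ZebraTrack automates can be sketched in a few lines of Python with NumPy; the synthetic "fish" movie and the threshold value are illustrative, not the published ImageJ workflow.

```python
import numpy as np

def centroid(frame, thresh=0.5):
    """Centroid (y, x) of pixels darker than `thresh` (the animal blob)."""
    ys, xs = np.nonzero(frame < thresh)
    return ys.mean(), xs.mean()

def track(frames):
    """Per-frame centroids and total distance travelled, in pixels."""
    path = np.array([centroid(f) for f in frames])
    dist = float(np.sqrt(((path[1:] - path[:-1])**2).sum(axis=1)).sum())
    return path, dist

# Synthetic movie: a dark 3x3 "fish" moving 5 px per frame along x.
frames = []
for i in range(4):
    f = np.ones((60, 80))
    f[30:33, 10 + 5 * i:13 + 5 * i] = 0.0
    frames.append(f)
path, dist = track(frames)
```

Endpoints such as thigmotaxis or freezing would then be derived from the same `path` (for example, time spent near the tank walls, or runs of near-zero frame-to-frame displacement).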

  12. Comparison of two freely available software packages for mass spectrometry imaging data analysis using brains from morphine addicted rats.

    PubMed

    Bodzon-Kulakowska, Anna; Marszalek-Grabska, Marta; Antolak, Anna; Drabik, Anna; Kotlinska, Jolanta H; Suder, Piotr

    Data analysis from mass spectrometry imaging (MSI) experiments is a very complex task. Most of the software packages devoted to this purpose are designed by the mass spectrometer manufacturers and are thus not freely available. Laboratories developing their own MS-imaging sources usually do not have access to the commercial software and must rely on freely available programs. The most recognized ones are BioMap, developed by Novartis under the Interactive Data Language (IDL), and Datacube, developed by the Dutch Foundation for Fundamental Research of Matter (FOM-Amolf). These two systems were used here for the analysis of images obtained from rat brain tissues subjected to morphine influence, and their capabilities were compared in terms of ease of use and the quality of the results obtained.

  13. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. 
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  14. Nanoscale deformation analysis with high-resolution transmission electron microscopy and digital image correlation

    DOE PAGES

    Wang, Xueju; Pan, Zhipeng; Fan, Feifei; ...

    2015-09-10

    We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources, resulting from transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions, are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. As a result, the DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at nanometer length scales.

  15. Fish-Eye Observing with Phased Array Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.

    The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may, in principle, span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.

  16. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  17. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September... 1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master's Thesis, Naval... NAVAL POSTGRADUATE SCHOOL Monterey, California THESIS PSYCHOPHYSICAL COMPARISONS IN IMAGE COMPRESSION ALGORITHMS by Christopher J. Bodine, March

  18. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). 
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
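
The heart of the calibration, a measured response curve that is inverted to map recorded signal back to source brightness, can be sketched as follows; the saturating response model and all numbers are invented for illustration.

```python
import numpy as np

def calibrate(brightness, signal):
    """Build an inverse lookup from measured signal back to source
    brightness, given a calibration sweep of known brightnesses.
    Monotone interpolation handles the camera's nonlinear response."""
    order = np.argsort(signal)
    sig = np.asarray(signal, dtype=float)[order]
    bri = np.asarray(brightness, dtype=float)[order]
    return lambda s: np.interp(s, sig, bri)

# Simulated nonlinear camera response that saturates near 255 counts.
b_cal = np.linspace(1.0, 600.0, 50)                 # known brightnesses
measured = 255.0 * (1.0 - np.exp(-b_cal / 150.0))   # camera output
to_brightness = calibrate(b_cal, measured)

# Recover the brightness of an "unknown" source from its signal.
unknown_signal = 255.0 * (1.0 - np.exp(-200.0 / 150.0))
b_est = float(to_brightness(unknown_signal))
```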

  19. Precise Absolute Astrometry from the VLBA Imaging and Polarimetry Survey at 5 GHz

    NASA Technical Reports Server (NTRS)

    Petrov, L.; Taylor, G. B.

    2011-01-01

    We present accurate positions for 857 sources derived from the astrometric analysis of 16 eleven-hour experiments from the Very Long Baseline Array imaging and polarimetry survey at 5 GHz (VIPS). Among the observed sources, positions of 430 objects were not previously determined at milliarcsecond-level accuracy. For 95% of the sources the uncertainty of their positions ranges from 0.3 to 0.9 mas, with a median value of 0.5 mas. This estimate of accuracy is substantiated by the comparison of positions of 386 sources that were previously observed in astrometric programs simultaneously at 2.3/8.6 GHz. Surprisingly, the ionosphere contribution to group delay was adequately modeled with the use of the total electron content maps derived from GPS observations and only marginally affected estimates of source coordinates.

  20. Registration and rectification needs of geology

    NASA Technical Reports Server (NTRS)

    Chavez, P. S., Jr.

    1982-01-01

    Geologic applications of remotely sensed imaging encompass five areas of interest. The five areas include: (1) enhancement and analysis of individual images; (2) work with small area mosaics of imagery which have been map projection rectified to individual quadrangles; (3) development of large area mosaics of multiple images for several counties or states; (4) registration of multitemporal images; and (5) data integration from several sensors and map sources. Examples for each of these types of applications are summarized.

  1. Joint Blind Source Separation by Multi-set Canonical Correlation Analysis

    PubMed Central

    Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D

    2009-01-01

    In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
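
For the two-dataset special case, the correlation-maximizing extraction that M-CCA generalizes can be sketched with classical CCA via the Björck-Golub QR/SVD route; the synthetic shared source below is illustrative, and the full M-CCA handles many datasets jointly as described in the abstract.

```python
import numpy as np

def cca(X, Y):
    """Canonical correlation analysis (the two-dataset special case of
    M-CCA): SVD of Qx' @ Qy, where Qx, Qy are orthonormal bases of the
    centered data matrices. Returns the canonical correlations and the
    X-side canonical variates (the extracted source estimates)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    return s, Qx @ U

# Two synthetic "datasets" mixing one shared latent source with noise.
rng = np.random.default_rng(1)
n = 2000
source = np.sin(np.linspace(0.0, 20.0, n))
X = np.outer(source, [1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal((n, 3))
Y = np.outer(source, [0.7, 0.2]) + 0.1 * rng.standard_normal((n, 2))
corrs, variates = cca(X, Y)
recovered = abs(np.corrcoef(variates[:, 0], source)[0, 1])
```

The top canonical correlation is close to 1 because both datasets share the same latent source, and the leading canonical variate recovers that source up to sign, which is the separability property the paper formalizes.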

  2. Live imaging of rat embryos with Doppler swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Larina, Irina V.; Furushima, Kenryo; Dickinson, Mary E.; Behringer, Richard R.; Larin, Kirill V.

    2009-09-01

    The rat has long been considered an excellent system to study mammalian embryonic cardiovascular physiology, but has lacked the extensive genetic tools available in the mouse to be able to create single gene mutations. However, the recent establishment of rat embryonic stem cell lines facilitates the generation of new models in the rat embryo to link changes in physiology with altered gene function to define the underlying mechanisms behind congenital cardiovascular birth defects. Along with the ability to create new rat genotypes there is a strong need for tools to analyze phenotypes with high spatial and temporal resolution. Doppler OCT has been previously used for 3-D structural analysis and blood flow imaging in other model species. We use Doppler swept-source OCT for live imaging of early postimplantation rat embryos. Structural imaging is used for 3-D reconstruction of embryo morphology and dynamic imaging of the beating heart and vessels, while Doppler-mode imaging is used to visualize blood flow. We demonstrate that Doppler swept-source OCT can provide essential information about the dynamics of early rat embryos and serve as a basis for a wide range of studies on functional evaluation of rat embryo physiology.

  3. Live imaging of rat embryos with Doppler swept-source optical coherence tomography

    PubMed Central

    Larina, Irina V.; Furushima, Kenryo; Dickinson, Mary E.; Behringer, Richard R.; Larin, Kirill V.

    2009-01-01

    The rat has long been considered an excellent system to study mammalian embryonic cardiovascular physiology, but has lacked the extensive genetic tools available in the mouse to be able to create single gene mutations. However, the recent establishment of rat embryonic stem cell lines facilitates the generation of new models in the rat embryo to link changes in physiology with altered gene function to define the underlying mechanisms behind congenital cardiovascular birth defects. Along with the ability to create new rat genotypes there is a strong need for tools to analyze phenotypes with high spatial and temporal resolution. Doppler OCT has been previously used for 3-D structural analysis and blood flow imaging in other model species. We use Doppler swept-source OCT for live imaging of early postimplantation rat embryos. Structural imaging is used for 3-D reconstruction of embryo morphology and dynamic imaging of the beating heart and vessels, while Doppler-mode imaging is used to visualize blood flow. We demonstrate that Doppler swept-source OCT can provide essential information about the dynamics of early rat embryos and serve as a basis for a wide range of studies on functional evaluation of rat embryo physiology. PMID:19895102

  4. Radar, Thermal Infrared, and Panchromatic Image Collection and Analysis. Multi-Source Image Analysis.

    DTIC Science & Technology

    1980-12-01

    Prepared for the U.S. Army Corps of Engineers, Engineer Topographic Laboratories, Fort Belvoir; December 1980; approved for public release, distribution unlimited. Radar, thermal infrared, and panchromatic imagery was collected by the Oregon Army National Guard at the Corvallis, Oregon, test site on 13 and 19 August 1980.

  5. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  6. Evaluating the effect of increased pitch, iterative reconstruction and dual source CT on dose reduction and image quality.

    PubMed

    Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier

    2018-06-14

    To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) against standard-pitch scans reconstructed with filtered back projection (FBP) using dual source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic scans, 54 thoracoabdominal scans and 21 abdominal scans. Three protocols were analysed: pitch of 1 reconstructed with FBP; pitch of 3.2 reconstructed with SAFIRE; and pitch of 3.2 with Stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and the dose differences between protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans (pitch 3.2) reconstructed with SAFIRE, with a reduction in volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of Stellar detectors, reflected in a 36% reduction in dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with Stellar detectors. Advances in knowledge: high-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
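    The objective image-quality metrics above (contrast-to-noise ratio and dose reduction) reduce to simple region-of-interest statistics. The sketch below uses hypothetical HU values and CTDIvol figures chosen only for illustration; the 75% figure merely mirrors the thoracic result quoted in the abstract.

```python
import numpy as np

# hypothetical ROI statistics (Hounsfield units) from one reconstruction
roi_liver = {"mean": 60.0, "sd": 12.0}
roi_fat = {"mean": -100.0, "sd": 12.0}

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest."""
    noise = np.sqrt((roi_a["sd"]**2 + roi_b["sd"]**2) / 2.0)
    return abs(roi_a["mean"] - roi_b["mean"]) / noise

# illustrative CTDIvol values (mGy) for standard vs high-pitch protocols
ctdi_pitch1, ctdi_pitch32 = 8.0, 2.0
dose_reduction = 100.0 * (1 - ctdi_pitch32 / ctdi_pitch1)
print(f"CNR: {cnr(roi_liver, roi_fat):.1f}, dose reduction: {dose_reduction:.0f}%")
```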

  7. Application of Multi-Source Remote Sensing Image in Yunnan Province Grassland Resources Investigation

    NASA Astrophysics Data System (ADS)

    Li, J.; Wen, G.; Li, D.

    2018-04-01

    To master background information on the utilization and ecological condition of grassland resources in Yunnan province and to improve the capacity for refined grassland management, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional investigation method is ground-based survey, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, by contrast, is low cost, wide in coverage and efficient, and can reflect the present situation of grassland resources objectively; it has become an indispensable grassland monitoring technology and data source and has gained increasing recognition and application in grassland resource monitoring research. This paper studies the application of multi-source remote sensing imagery in the Yunnan province grassland resource investigation. First, thematic grassland information is extracted and field investigation conducted through segmentation of BJ-2 high-spatial-resolution images. Second, grassland types are classified and grassland degradation is evaluated using the high-resolution characteristics of Landsat 8 imagery. Third, a grass yield model and quality classification are obtained from the wide scanning swath and frequent coverage of MODIS images combined with field sample data. Finally, qualitative field analysis of grassland is performed with UAV remote sensing imagery. Implementation over the project area proves that multi-source remote sensing data can be applied to the grassland resource investigation in Yunnan province and is an indispensable method.

  8. Crustal deformation at Long Valley Caldera, eastern California, 1992-1996 inferred from satellite radar interferometry

    USGS Publications Warehouse

    Thatcher, W.; Massonnet, D.

    1997-01-01

    Satellite radar interferometric images of Long Valley caldera show a pattern of surface deformation that resembles that expected from analysis of an extensive suite of ground-based geodetic data. Images spanning 2-year and 4-year intervals are consistent with uniform movement rates determined from leveling surveys. Synthetic interferograms generated from ellipsoidal-inclusion source models based on inversion of the ground-based data show generally good agreement with the observed images. Two interferograms show evidence for a magmatic source southwest of the caldera in a region not covered by ground measurements. Poorer image quality in the 4-year interferogram indicates that temporal decorrelation of surface radar reflectors is progressively degrading the fringe pattern in the Long Valley region. Copyright 1997 by the American Geophysical Union.
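    The mapping from surface deformation to interferometric fringes can be sketched with the standard two-way phase relation (one fringe per half wavelength of line-of-sight motion). The Gaussian uplift bowl, the 10 cm amplitude and the C-band wavelength below are illustrative assumptions, not the Long Valley source model.

```python
import numpy as np

WAVELENGTH = 0.056   # m, approximately the C-band radar wavelength

def wrapped_phase(deformation_m):
    """Two-way interferometric phase for line-of-sight deformation, wrapped."""
    phase = 4.0 * np.pi * deformation_m / WAVELENGTH
    return np.angle(np.exp(1j * phase))      # wrap into (-pi, pi]

# radially symmetric uplift bowl, 10 cm peak, as a toy deformation source
x = np.linspace(-10e3, 10e3, 256)
xx, yy = np.meshgrid(x, x)
uplift = 0.10 * np.exp(-(xx**2 + yy**2) / (2 * (3e3)**2))
fringes = wrapped_phase(uplift)
n_fringes = 0.10 / (WAVELENGTH / 2)          # one fringe per half wavelength
```

    Counting the concentric fringes in such an image directly reads off the peak deformation, which is what makes interferograms comparable to leveling-survey rates.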

  9. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
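    The solver class used by SISSY, the alternating direction method of multipliers (ADMM), can be illustrated on the plain l1-regularized least-squares problem. This is a generic lasso sketch, not the full SISSY objective (which adds a structured, total-variation-like penalty over the cortical mesh); the problem sizes and regularization weight are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.5, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 (sparse source estimate)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))    # factor once, reuse
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)             # sparsifying prox step
        u = u + x - z                                    # dual ascent update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))                       # toy lead-field matrix
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]                   # three active "sources"
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = admm_lasso(A, b)
```

    Each iteration costs one triangular solve plus elementwise operations, which is the efficiency argument for ADMM over the second-order cone programming used by VB-SCCD.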

  10. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers much of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers can rapidly and simultaneously open several images in their web browser, adjust the intensity min/max cutoffs, scaling function and zoom level, apply color-maps, view position and FITS header information, execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects.
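    The intensity min/max cutoff and scaling-function adjustment described above amount to a mapping from data values to 8-bit display values. The sketch below shows one such mapping; the cutoff values, stretch constants and test image are illustrative assumptions, not Image Explorer code.

```python
import numpy as np

def scale_for_display(img, vmin, vmax, func="asinh"):
    """Clip to [vmin, vmax], apply a stretch, and map to 0-255 display values."""
    x = np.clip((img - vmin) / (vmax - vmin), 0.0, 1.0)
    if func == "log":
        x = np.log1p(1000.0 * x) / np.log1p(1000.0)
    elif func == "asinh":
        x = np.arcsinh(10.0 * x) / np.arcsinh(10.0)      # compresses bright end
    return (255.0 * x).astype(np.uint8)

img = np.array([[0.0, 50.0], [200.0, 5000.0]])           # toy flux values
out = scale_for_display(img, vmin=10.0, vmax=1000.0)
```

    Log and asinh stretches keep faint structure visible next to bright sources, which is why viewers expose the scaling function as a user control rather than fixing a linear map.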

  11. The Image Data Resource: A Bioimage Data Integration and Publication Platform.

    PubMed

    Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R

    2017-08-01

    Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.

  12. Chemical investigation of three plutonium–beryllium neutron sources

    DOE PAGES

    Byerly, Benjamin; Kuhn, Kevin; Colletti, Lisa; ...

    2017-02-03

    Thorough physical and chemical characterization of plutonium–beryllium (PuBe) neutron sources is an important capability with applications ranging from material accountancy to nuclear forensics. Furthermore, characterization of PuBe sources is not trivial owing to range of existing source designs and the need for adequate infrastructure to deal with radiation and protect the analyst. Our study demonstrates a method for characterization of three PuBe sources that includes physical inspection and imaging followed by controlled disassembly and destructive analysis.

  13. Chemical investigation of three plutonium–beryllium neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byerly, Benjamin; Kuhn, Kevin; Colletti, Lisa

    Thorough physical and chemical characterization of plutonium–beryllium (PuBe) neutron sources is an important capability with applications ranging from material accountancy to nuclear forensics. Furthermore, characterization of PuBe sources is not trivial owing to range of existing source designs and the need for adequate infrastructure to deal with radiation and protect the analyst. Our study demonstrates a method for characterization of three PuBe sources that includes physical inspection and imaging followed by controlled disassembly and destructive analysis.

  14. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    This research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed, and those with the greatest impact are examined in terms of a specific biophotonic task - skin cancer diagnostics. The results indicate that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented at acquisition, the paper proposes an image post-processing algorithm - low-frequency filtering. Practical results show improved diagnostic results after applying the proposed filter; moreover, the filter does not degrade diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters; further work is needed to test the algorithm in other biophotonic applications and to propose automatic parameter selection.
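    One common form of low-frequency illumination correction is to divide out a Gaussian-smoothed background estimate; the sketch below illustrates the idea on a synthetic gradient. The filter width and test scene are illustrative assumptions, not the authors' tuned parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(img, sigma=20.0):
    """Divide out a low-frequency illumination estimate (Gaussian background)."""
    background = gaussian_filter(img.astype(float), sigma)
    corrected = img / np.maximum(background, 1e-6)
    return corrected / corrected.mean()      # renormalize to unit mean

# synthetic flat scene observed under a linear illumination gradient
h, w = 128, 128
gradient = np.linspace(0.5, 1.5, w)[None, :] * np.ones((h, 1))
flat_scene = np.full((h, w), 100.0)
observed = flat_scene * gradient
restored = correct_illumination(observed, sigma=20.0)
```

    Because only frequencies well below the lesion scale are removed, fine diagnostic detail passes through the filter largely unchanged, consistent with the paper's observation that defect-free images are not degraded.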

  15. AKARI North Ecliptic Pole Deep Survey. Revision of the catalogue via a new image analysis

    NASA Astrophysics Data System (ADS)

    Murata, K.; Matsuhara, H.; Wada, T.; Arimatsu, K.; Oi, N.; Takagi, T.; Oyabu, S.; Goto, T.; Ohyama, Y.; Malkan, M.; Pearson, C.; Małek, K.; Solarz, A.

    2013-11-01

    Context. We present the revised near- to mid-infrared catalogue of the AKARI North Ecliptic Pole deep survey. The survey has the unique advantage of continuous filter coverage from 2 to 24 μm over nine photometric bands, but the initial version of the survey catalogue leaves room for improvement in the image analysis stage; the original images are strongly contaminated by the behaviour of the detector and the optical system. Aims: The purpose of this study is to devise new image analysis methods and to improve the detection limit and reliability of the source extraction. Methods: We removed the scattered light and stray light from the Earth limb, and corrected for artificial patterns in the images by creating appropriate templates. We also removed any artificial sources due to bright sources by using their properties or masked them out visually. In addition, for the mid-infrared source extraction, we created detection images by stacking all six bands. This reduced the sky noise and enabled us to detect fainter sources more reliably. For the near-infrared source catalogue, we considered only objects with counterparts from ground-based catalogues to avoid fake sources. For our ground-based catalogues, we used catalogues based on the CFHT/MegaCam z' band, CFHT/WIRCam Ks band and Subaru/Scam z' band. Objects with multiple counterparts were all listed in the catalogue with a merged flag for the AKARI flux. Results: The detection limits of all mid-infrared bands were improved by ~20%, and the total number of detected objects was increased by ~2000 compared with the previous version of the catalogue; it now has 9560 objects. The 5σ detection limits in our catalogue are 11, 9, 10, 30, 34, 57, 87, 93, and 256 μJy in the N2, N3, N4, S7, S9W, S11, L15, L18W, and L24 bands, respectively. The astrometric accuracies of these band detections are 0.48, 0.52, 0.55, 0.99, 0.95, 1.1, 1.2, 1.3, and 1.6 arcsec, respectively. 
The false-detection rate of all nine bands was decreased to less than 0.3%. In total, 27 770 objects are listed in the catalogue, 11 349 of which have mid-infrared fluxes. The catalogue is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/559/A132 or at the ISAS/JAXA observers page, http://www.ir.isas.jaxa.jp/ASTRO-F/Observation/
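    The band-stacking step described above relies on uncorrelated noise averaging down roughly as the square root of the number of stacked frames. The following toy example (unit sky noise per band, a single synthetic point source) illustrates why stacking six bands makes fainter sources detectable; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, h, w = 6, 64, 64
source = np.zeros((h, w))
source[32, 32] = 5.0                                      # faint point source
frames = source + rng.standard_normal((n_bands, h, w))    # unit sky noise per band

stacked = frames.mean(axis=0)                             # detection image
single_noise = frames[0].std()
stacked_noise = np.delete(stacked.ravel(), 32 * w + 32).std()
print(f"noise: single {single_noise:.2f}, stacked {stacked_noise:.2f}")
```

    The stacked image has roughly 1/sqrt(6) of the single-band noise, so the same source sits at about 2.4 times the significance it had in any one band.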

  16. LesionTracker: Extensible Open-Source Zero-Footprint Web Viewer for Cancer Imaging Research and Clinical Trials.

    PubMed

    Urban, Trinity; Ziegler, Erik; Lewis, Rob; Hafey, Chris; Sadow, Cheryl; Van den Abbeele, Annick D; Harris, Gordon J

    2017-11-01

    Oncology clinical trials have become increasingly dependent upon image-based surrogate endpoints for determining patient eligibility and treatment efficacy. As therapeutics have evolved and multiplied in number, the tumor metrics criteria used to characterize therapeutic response have become progressively more varied and complex. The growing intricacies of image-based response evaluation, together with rising expectations for rapid and consistent results reporting, make it difficult for site radiologists to adequately address local and multicenter imaging demands. These challenges demonstrate the need for advanced cancer imaging informatics tools that can help ensure protocol-compliant image evaluation while simultaneously promoting reviewer efficiency. LesionTracker is a quantitative imaging package optimized for oncology clinical trial workflows. The goal of the project is to create an open source zero-footprint viewer for image analysis that is designed to be extensible as well as capable of being integrated into third-party systems for advanced imaging tools and clinical trials informatics platforms. Cancer Res; 77(21); e119-22. ©2017 AACR.

  17. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.

  18. Optical tolerances for the PICTURE-C mission: error budget for electric field conjugation, beam walk, surface scatter, and polarization aberration

    NASA Astrophysics Data System (ADS)

    Mendillo, Christopher B.; Howe, Glenn A.; Hewawasam, Kuravi; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya

    2017-09-01

    The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. Four leakage sources arising from the optical fabrication tolerances and optical coatings are considered: electric field conjugation (EFC) residuals, beam walk on the secondary and tertiary mirrors, optical surface scattering, and polarization aberration. Simulations and analysis of these four leakage sources for the PICTURE-C optical design are presented here.

  19. Connecting Swath Satellite Data With Imagery in Mapping Applications

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.

    2016-12-01

    Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.

  20. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
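    A per-pixel noise model combining the three contributors named above (photon noise of stellar sources, sky noise, and dark/detector noise) can be sketched as follows. The gain and dark-noise figures are illustrative assumptions, not the instrument's calibration values.

```python
import numpy as np

def noise_map(counts, sky, gain=4.0, dark_sd=10.0):
    """Per-pixel 1-sigma noise: source and sky shot noise plus detector dark noise.

    counts and sky are in ADU; gain in e-/ADU; dark_sd in electrons.
    """
    electrons = np.maximum(counts * gain, 0.0) + np.maximum(sky * gain, 0.0)
    return np.sqrt(electrons + dark_sd**2) / gain    # back to ADU

counts = np.array([[0.0, 100.0], [1000.0, 10000.0]])  # toy stellar counts
sky = np.full_like(counts, 50.0)                      # uniform sky level
sigma = noise_map(counts, sky)
```

    Feeding such a map to a fitter like Starfinder raises the effective threshold near bright stars (suppressing fake detections) while lowering it in dark regions, which is exactly the behavior the abstract describes.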

  1. System and Method for Scan Range Gating

    NASA Technical Reports Server (NTRS)

    Lindemann, Scott (Inventor); Zuk, David M. (Inventor)

    2017-01-01

    A system for scanning light to define a range gated signal includes a pulsed coherent light source that directs light into the atmosphere, a light gathering instrument that receives the light modified by atmospheric backscatter and transfers the light onto an image plane, a scanner that scans collimated light from the image plane to form a range gated signal from the light modified by atmospheric backscatter, a control circuit that coordinates timing of a scan rate of the scanner and a pulse rate of the pulsed coherent light source so that the range gated signal is formed according to a desired range gate, an optical device onto which an image of the range gated signal is scanned, and an interferometer to which the image of the range gated signal is directed by the optical device. The interferometer is configured to modify the image according to a desired analysis.

  2. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex activity , based on the evidence ; it consists of information from multiple sources : medical record text , sound recordings , images and videos generated by a large number of devices . Medical imaging is one of the most important sources of information since they offer comprehensive support of medical procedures for diagnosis and follow-up . However , the amount of information generated by image capturing gadgets quickly exceeds storage availability in radiology services , generating additional costs in devices with greater storage capacity . Besides , the current trend of developing applications in cloud computing has limitations, even though virtual storage is available from anywhere, connections are made through internet . In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs . In this paper we present a review of compression techniques used for image storage , and a critical analysis of them from the point of view of their use in clinical settings.

  3. En face swept-source optical coherence tomographic analysis of X-linked juvenile retinoschisis.

    PubMed

    Ono, Shinji; Takahashi, Atsushi; Mase, Tomoko; Nagaoka, Taiji; Yoshida, Akitoshi

    2016-07-01

    To clarify the area of retinoschisis by X-linked juvenile retinoschisis (XLRS) using swept-source optical coherence tomography (SS-OCT) en face images. We report two cases of XLRS in the same family. The patients presented with bilateral blurred vision. The posterior segment examination showed a spoked-wheel pattern in the macula. SS-OCT cross-sectional images revealed widespread retinal splitting at the level of the inner nuclear layer bilaterally. We diagnosed XLRS. To evaluate the area of retinoschisis, we obtained en face SS-OCT images, which clearly visualized the area of retinoschisis seen as a sunflower-like structure in the macula. We report the findings on en face SS-OCT images from patients with XLRS. The en face images using SS-OCT showed the precise area of retinoschisis compared with the SS-OCT thickness map and are useful for managing patients with XLRS.

  4. Analysis of Magnetic Resonance Image Signal Fluctuations Acquired During MR-Guided Radiotherapy

    PubMed Central

    Breto, Adrian L; Padgett, Kyle R; Ford, John C; Kwon, Deukwoo; Chang, Channing; Fuss, Martin; Mellon, Eric A

    2018-01-01

    Magnetic resonance-guided radiotherapy (MRgRT) is a new and evolving treatment modality that allows unprecedented visualization of the tumor and surrounding anatomy. MRgRT includes daily 3D magnetic resonance imaging (MRI) for setup and rapidly repeated near real-time MRI scans during treatment for target tracking. One of the more exciting potential benefits of MRgRT is the ability to analyze serial MRIs to monitor treatment response or predict outcomes. A typical radiation treatment (RT) over the span of 10-15 minutes on the MRIdian system (ViewRay, Cleveland, OH) yields thousands of “cine” images, each acquired in 250 ms. This unique data allows a glimpse into image intensity changes during RT delivery. In this report, we analyze cine images from a single fraction RT of a glioblastoma patient on the ViewRay platform in order to characterize the dynamic signal changes occurring during RT therapy. The individual frames in the cines were saved into DICOM format and read into an MIM image analysis platform (MIM Software, Cleveland, OH) as a time series. The three possible states of the three Cobalt-60 radiation sources—OFF, READY, and ON—were also recorded. An in-house Java plugin for MIM was created in order to perform principal component analysis (PCA) on each of the datasets. The analysis yielded a first principal component related to a monotonic signal increase over the course of the treatment fraction. We found several distortion patterns in the data that we postulate result from the perturbation of the magnetic field due to the moving metal parts in the platform while treatment was being administered. The largest variations were detected when all Cobalt-60 sources were OFF. During this phase of the treatment, the gantry and multi-leaf collimators (MLCs) are moving. Conversely, when all Cobalt-60 sources were in the ON position, the image signal fluctuations were minimal, relating to very little mechanical motion. 
At this phase, the gantry, the MLCs, and sources are fixed in their positions. These findings were confirmed in a study with the daily quality assurance (QA) phantom. While the identified variations were not related to physiological processes, our findings confirm the sensitivity of the developed approach to identify very small fluctuations. Relating these variations to the physical changes that occur during treatment shows the methodical ability of the technique to uncover their underlying sources. PMID:29850380

  5. Terrestrial Myriametric Radio Burst Observed by IMAGE and Geotail Satellites

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Hashimoto, KoZo; Kojima, Hirotsugu; Boardson, Scott A.; Garcia, Leonard N.; Matsumoto, Hiroshi; Green, James L.; Reinisch, Bodo W.

    2013-01-01

    We report the simultaneous detection of a terrestrial myriametric radio burst (TMRB) by IMAGE and Geotail on 19 August 2001. The TMRB was confined in time (0830-1006 UT) and frequency (12-50kHz). Comparisons with all known nonthermal myriametric radiation components reveal that the TMRB might be a distinct radiation with a source that is unrelated to the previously known radiation. Considerations of beaming from spin-modulation analysis and observing satellite and source locations suggest that the TMRB may have a fan beamlike radiation pattern emitted by a discrete, dayside source located along the poleward edge of magnetospheric cusp field lines. TMRB responsiveness to IMF Bz and By orientations suggests that a possible source of the TMRB could be due to dayside magnetic reconnection instigated by northward interplanetary field condition.

  6. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper describes easily accessible, integrated web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising complete image processing, spatial analysis and digital mapping functionality; it was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into modular, plug-in-based open source software and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software under the GPL license. 
GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins that convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of plug-ins, and outline our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter in particular can be seen as a major step forward in moving well-proven desktop-based processing functionality to the web, enabling the embedding of ILWIS functionality in Spatial Data Infrastructures or even its execution in scalable, on-demand cloud computing environments.
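The OGC WMS integration described in the record above ultimately issues standard GetMap requests. A minimal sketch of building such a request follows; the endpoint URL and layer name are hypothetical stand-ins for a real GEONETCast-derived service, while the query parameters themselves are those defined by the WMS 1.1.1 specification:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height,
                   srs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.1.1 GetMap request URL for a single layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",                            # default style
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer covering Ghana (bbox in lon/lat)
url = wms_getmap_url("http://example.org/wms", "msg:rainfall",
                     (-3.5, 4.5, 1.5, 11.5), 512, 716)
```

Fetching such a URL returns a rendered map image that a desktop client like ILWIS can overlay on locally stored data.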

  7. Affective attitudes to face images associated with intracerebral EEG source location before face viewing.

    PubMed

    Pizzagalli, D; Koenig, T; Regard, M; Lehmann, D

    1999-01-01

    We investigated whether different, personality-related affective attitudes are associated with different brain electric field (EEG) sources before any emotional challenge (stimulus exposure). A 27-channel EEG was recorded in 15 subjects during eyes-closed resting. After recording, subjects rated 32 images of human faces for affective appeal. The subjects in the first (i.e., most negative) and fourth (i.e., most positive) quartile of general affective attitude were further analyzed. The EEG data (mean = 25 ± 4.8 s/subject) were subjected to frequency-domain model dipole source analysis (FFT-Dipole-Approximation), resulting in 3-dimensional intracerebral source locations and strengths for the delta-theta, alpha, and beta EEG frequency bands, and for the full-range (1.5-30 Hz) band. Subjects with negative attitude (compared to those with positive attitude) showed the following source locations: more inferior for all frequency bands, more anterior for the delta-theta band, more posterior and more right for the alpha, beta and 1.5-30 Hz bands. One year later, the subjects were asked to rate the face images again. The rating scores for the same face images were highly correlated for all subjects, and original and retest mean affective attitude was highly correlated across subjects. The present results show that subjects with different affective attitudes to face images had different active, cerebral, neural populations in a task-free condition prior to viewing the images. We conclude that the brain functional state which implements affective attitude towards face images as a personality feature exists without elicitors, as a continuously present, dynamic feature of brain functioning. Copyright 1999 Elsevier Science B.V.

  8. Searching for Wolf-Rayet Stars Beyond the Local Group

    NASA Astrophysics Data System (ADS)

    Bibby, J. L.; Shara, M. M.; Crowther, P. A.; Moffat, A. F. J.

    2012-12-01

    We present preliminary results from our HST/WFC3 F469N narrow-band imaging of the nearby star-forming galaxy M101 in which we search for Wolf-Rayet (WR) stars, possible progenitors of Type Ibc core-collapse supernovae (ccSNe). From analysis of the central pointing of M101 we identify ~1000 WR candidates from photometric analysis and estimate ~450 using the “blinking” method. From analysis of a sample region we find that 35% of our WR candidates would not be detected in ground-based surveys and 40% of sources are not detected in the HST F435W images, highlighting the importance of high spatial resolution narrow-band imaging.

  9. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases

    PubMed Central

    Janowczyk, Andrew; Madabhushi, Anant

    2016-01-01

    Background: Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific “handcrafted” features, can require extensive tuning to accommodate these variances. However, DL takes a more domain-agnostic approach, combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP-related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information-rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. Aims: This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce results that are comparable, and in many cases superior, to those from state-of-the-art hand-crafted feature-based classification approaches. 
Results: Specifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50 k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images). Conclusion: This paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data. PMID:27563488
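The F-scores quoted in the results above combine precision and recall into a single figure. A minimal sketch of computing them from detection counts or pixel-wise binary masks; the counts and masks below are illustrative, not values from the paper:

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-measure from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def mask_f1(pred, truth):
    """Pixel-wise F1 between two flat binary masks (sequences of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    return f_score(tp, fp, fn)

# Illustrative detection counts: 8 hits, 2 false alarms, 2 misses
detection_f1 = f_score(8, 2, 2)
# Illustrative 4-pixel masks: one hit, one false alarm, one miss
segmentation_f1 = mask_f1([1, 1, 0, 0], [1, 0, 1, 0])
```

With beta = 1 this is the familiar F1; other beta values weight recall more or less heavily than precision.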

  11. The Importance of Particle Induced X-Ray Emission (PIXE) Analysis and Imaging to the Search for Life on the Ocean Worlds

    NASA Technical Reports Server (NTRS)

    Blake, D. F.; Sarrazin, P.; Thompson, Kathleen

    2017-01-01

    The MapX imaging X-ray spectrometer is described and Monte Carlo modeling is used to show the efficacy of 244-Cm radioisotope sources in detecting and quantifying the biogenic elements in ices on Ocean Worlds such as Europa.

  12. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    NASA Astrophysics Data System (ADS)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
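The tomographic reconstruction step described above amounts to inverting a linear system y = A x relating surface measurements to internal hemodynamic changes. A minimal sketch using Tikhonov-regularized least squares; the synthetic sensitivity matrix, noise level, and regularization weight are all illustrative assumptions, not values from any real DOT system:

```python
import numpy as np

def tikhonov_inverse(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Synthetic underdetermined problem: 20 measurements, 50 voxels
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))            # stand-in sensitivity (Jacobian) matrix
x_true = np.zeros(50)
x_true[10] = 1.0                         # a single localized absorption change
y = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = tikhonov_inverse(A, y, lam=0.1)
```

Because the system is underdetermined, the regularization term selects a minimum-norm solution; the reconstructed image is smoothed but still peaks at the true voxel.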

  13. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect gives valuable information about tissue for early tumor diagnosis, wherein the time and frequency characteristics of the magneto-acoustic signal are important for image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analyses under a quasi-1D transmission condition were carried out on the time and frequency characteristics of the magneto-acoustic signals of models with different thicknesses, and the simulated signals were verified through experiments. The simulations showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample: thin samples, less than one pulse wavelength thick, and thick samples, more than one wavelength thick, showed different summed waveforms and frequency characteristics owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic-source and conductivity reconstruction in media of different thicknesses in magneto-acoustic imaging.

  14. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    Target identification is one of the core problems of ballistic missile defense, and infrared imaging simulation is an important means of studying target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of exo-atmospheric ballistic targets from two aspects, the parameters of the space-based sensor camera and the characteristics of the target, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target image.

  15. Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging.

    PubMed

    Chaudhari, Abhijit J; Darvas, Felix; Bading, James R; Moats, Rex A; Conti, Peter S; Smith, Desmond J; Cherry, Simon R; Leahy, Richard M

    2005-12-07

    For bioluminescence imaging studies in small animals, it is important to be able to accurately localize the three-dimensional (3D) distribution of the underlying bioluminescent source. The spectrum of light produced by the source that escapes the subject varies with the depth of the emission source because of the wavelength-dependence of the optical properties of tissue. Consequently, multispectral or hyperspectral data acquisition should help in the 3D localization of deep sources. In this paper, we describe a framework for fully 3D bioluminescence tomographic image acquisition and reconstruction that exploits spectral information. We describe regularized tomographic reconstruction techniques that use semi-infinite slab or FEM-based diffusion approximations of photon transport through turbid media. Singular value decomposition analysis was used for data dimensionality reduction and to illustrate the advantage of using hyperspectral rather than achromatic data. Simulation studies in an atlas-mouse geometry indicated that sub-millimeter resolution may be attainable given accurate knowledge of the optical properties of the animal. A fixed arrangement of mirrors and a single CCD camera were used for simultaneous acquisition of multispectral imaging data over most of the surface of the animal. Phantom studies conducted using this system demonstrated our ability to accurately localize deep point-like sources and show that a resolution of 1.5 to 2.2 mm for depths up to 6 mm can be achieved. We also include an in vivo study of a mouse with a brain tumour expressing firefly luciferase. Co-registration of the reconstructed 3D bioluminescent image with magnetic resonance images indicated good anatomical localization of the tumour.
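The singular value decomposition analysis mentioned above, used for data dimensionality reduction, can be sketched as a truncated SVD of the spectral data matrix. The two-component synthetic data here is purely illustrative, standing in for hyperspectral measurements whose rows are wavelengths and whose columns are surface measurements:

```python
import numpy as np

def truncated_svd(M, k):
    """Rank-k approximation of a (wavelengths x measurements) data matrix."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
# Synthetic data: two spectral components mixed across 100 measurements
spectra = rng.normal(size=(30, 2))    # 30 wavelength bins, 2 components
weights = rng.normal(size=(2, 100))   # mixing weights per measurement
M = spectra @ weights                 # rank-2 data matrix
M2 = truncated_svd(M, 2)              # rank-2 approximation recovers M
```

When the data are genuinely low-rank, as here, the truncated reconstruction is essentially exact; with noisy hyperspectral data the retained rank trades off noise suppression against spectral detail.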

  16. Quantitative EEG and low resolution electromagnetic tomography (LORETA) imaging of patients with persistent auditory hallucinations.

    PubMed

    Lee, Seung-Hwan; Wynn, Jonathan K; Green, Michael F; Kim, Hyun; Lee, Kang-Joon; Nam, Min; Park, Joong-Kyu; Chung, Young-Cho

    2006-04-01

    Electrophysiological studies have demonstrated gamma and beta frequency oscillations in response to auditory stimuli. The purpose of this study was to test whether auditory hallucinations (AH) in schizophrenia patients reflect abnormalities in gamma and beta frequency oscillations and to investigate the source generators of these abnormalities. This hypothesis was tested using quantitative electroencephalography (qEEG) and low-resolution electromagnetic tomography (LORETA) source imaging. Twenty-five schizophrenia patients with treatment-refractory AH, lasting for at least 2 years, and 23 schizophrenia patients without AH (N-AH) in the past 2 years were recruited for the study. Spectral analysis of the qEEG and source imaging of the frequency bands were performed on artifact-free 30-s epochs recorded during rest. AH patients showed significantly increased beta 1 and beta 2 frequency amplitude compared with N-AH patients. Gamma and beta (2 and 3) frequencies were significantly correlated in AH but not in N-AH patients. Source imaging revealed significantly increased beta (1 and 2) activity in the left inferior parietal lobule and the left medial frontal gyrus in AH versus N-AH patients. These results imply that AH reflects increased beta-frequency oscillations with neural generators localized in speech-related areas.

  17. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    - PNNL, Harold Trease

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user-specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. In essence, it is a very general search engine for any pattern in any binary data stream, with uses in video analytics, image analysis, audio analysis, searching hard drives, monitoring network traffic, etc.

  18. Measurement and simulation for a complementary imaging with the neutron and X-ray beams

    NASA Astrophysics Data System (ADS)

    Hara, Kaoru Y.; Sato, Hirotaka; Kamiyama, Takashi; Shinohara, Takenao

    2017-09-01

    By using a composite source system, we measured thermal-neutron and keV X-ray radiographs at the 45-MeV electron linear accelerator facility at Hokkaido University. The source system provides alternating neutron and X-ray beams by switching the production target onto the electron beam axis. In the measurement demonstrating complementary imaging, a detector based on a vacuum-tube-type neutron color image intensifier was applied to both beams for dual use. On the other hand, for reducing background in the neutron transmission spectrum, test measurements using a gadolinium-type neutron grid were performed with a cold neutron source at Hokkaido University. In addition, simulations of neutron and X-ray transmission for various substances were performed using the PHITS code, and a data analysis procedure for estimating the substance of a sample was investigated through these simulations.

  19. CosmoQuest Transient Tracker: Opensource Photometry & Astrometry software

    NASA Astrophysics Data System (ADS)

    Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team

    2018-01-01

    CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include: astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries; this suite of open source image manipulation routines, led by Wayne Rasband, is released primarily under the MIT license. In this release, we build on these libraries to add source identification for point and point-like sources, and to do astrometry. Our materials are released under the Apache 2.0 license on GitHub (http://github.com/CosmoQuestTeam) and documentation can be found at http://cosmoquest.org/TransientTracker.

  20. The spatial coherence function in scanning transmission electron microscopy and spectroscopy.

    PubMed

    Nguyen, D T; Findlay, S D; Etheridge, J

    2014-11-01

    We investigate the implications of the form of the spatial coherence function, also referred to as the effective source distribution, for quantitative analysis in scanning transmission electron microscopy, and in particular for interpreting the spatial origin of imaging and spectroscopy signals. These questions are explored using three different source distribution models applied to a GaAs crystal case study. The shape of the effective source distribution was found to have a strong influence not only on the scanning transmission electron microscopy (STEM) image contrast, but also on the distribution of the scattered electron wavefield and hence on the spatial origin of the detected electron intensities. The implications this has for measuring structure, composition and bonding at atomic resolution via annular dark field, X-ray and electron energy loss STEM imaging are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Analysis of monochromatic and quasi-monochromatic X-ray sources in imaging and therapy

    NASA Astrophysics Data System (ADS)

    Westphal, Maximillian; Lim, Sara; Nahar, Sultana; Orban, Christopher; Pradhan, Anil

    2017-04-01

    We studied biomedical imaging and therapeutic applications of recently developed quasi-monochromatic and monochromatic X-ray sources. Using the Monte Carlo code GEANT4, we found that the quasi-monochromatic 65 keV Gaussian X-ray spectrum created by inverse Compton scattering with relativistic electron beams was capable of producing better image contrast with less radiation than conventional 120 kV broadband CT scans. We also explored possible experimental detection of theoretically predicted Kα resonance fluorescence in high-Z elements using the European Synchrotron Radiation Facility with a tungsten (Z = 74) target. In addition, we studied a newly developed quasi-monochromatic source generated by converting broadband X-rays to monochromatic Kα and Kβ X-rays with a zirconium target (Z = 40). We will further study how these Kα- and Kβ-dominated spectra can be implemented in conjunction with nanoparticles for targeted therapy. Acknowledgement: Ohio Supercomputer Center, Columbus, OH.

  2. Dental non-linear image registration and collection method with 3D reconstruction and change detection

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Fagan, Dean; Lemieux, George

    2017-03-01

    The capability of a software algorithm to automatically align same-patient dental bitewing and panoramic x-rays over time is complicated by differences in collection perspectives. We successfully used image correlation with an affine transform for each pixel to discover common image borders, followed by a non-linear homography perspective adjustment to closely align the images. However, significant improvements in image registration could be realized if images were collected from the same perspective, thus facilitating change analysis. The perspective differences due to current dental image collection devices are so significant that straightforward change analysis is not possible. To address this, a new custom dental tray could be used to provide the standard reference needed for consistent positioning of a patient's mouth. Similar to sports mouth guards, the dental tray could be fabricated in standard sizes from plastic and use integrated electronics that have been miniaturized. In addition, the x-ray source needs to be consistently positioned in order to collect images with similar angles and scales. Solving this pose correction is similar to solving for collection angle in aerial imagery for change detection. A standard collection system would provide a method for consistent source positioning using real-time sensor position feedback from a digital x-ray image reference. Automated, robotic sensor positioning could replace manual adjustments. Given an image set from a standard collection, a disparity map between images can be created using parallax from overlapping viewpoints to enable change detection. This perspective data can be rectified and used to create a three-dimensional dental model reconstruction.
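The non-linear homography perspective adjustment described in the record above reduces, for each point, to a 3x3 matrix multiply in homogeneous coordinates followed by a perspective divide. A minimal sketch follows; the translation-only homography is an illustrative stand-in for one that would be estimated from correspondences between two dental x-rays:

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of points through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = pts_h @ H.T                               # apply the 3x3 transform
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

# A pure translation expressed as a homography: shift by (5, -3) pixels
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
pts = np.array([[0.0, 0.0], [10.0, 20.0]])
out = apply_homography(H, pts)
```

A full registration pipeline would estimate H from matched landmarks in the two images and then resample one image onto the other's grid before differencing for change detection.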

  3. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, that manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  4. Source-to-Sink Methods by Hyperspectral Imaging: a Case Study of the Laminated Sediments of Lake Linné (Svalbard).

    NASA Astrophysics Data System (ADS)

    Van Exem, A.; Debret, M.; Copard, Y.; Verpoorter, C.; Sorrel, P.; de Wet, G.; Werner, A.; Roof, S.; Laignel, B.; Retelle, M.

    2016-12-01

    Laminated sediments contain valuable information recorded at a micrometric scale, and information about sediment fluxes and origins requires high-resolution source-tracking analysis. Quick and non-destructive, hyperspectral imaging provides contiguous reflectance datasets in two dimensions with a spatial resolution of 0.02 mm. Located in western Spitsbergen, Lake Linné is the largest lake in the region. Erosion is mainly driven by glacier fluctuations, and three different bedrocks are potential sediment sources; organic matter (coal) is found only in some Carboniferous rocks. Four cores recovered from different parts of the lake contain millimeter-scale laminae. Two approaches were compared: (i) measurement of statistical correlations between the sediments and source samples, and (ii) extraction of extreme spectral signatures from the VNIR hyperspectral images. Total organic carbon (TOC) values of all samples were also obtained by bulk geochemistry (RE6® pyrolyzer). The measured similarity between the hyperspectral image and the field samples thus illustrates the source contributions within the core. Three sample clusters and three equivalent spectral signatures were found. TOC values from the archive show good correlation (r=0.86, p<0.001, n=73) with the hyperspectral signature related to TOC content. A least-squares regression (r²=0.74) was used to extrapolate TOC values in order to represent their distribution at 0.02 mm resolution. This is the first source-to-sink study based on imaging spectroscopy. Our results indicate that hyperspectral imagery is a useful tool to (i) identify sediment sources, (ii) perform continuous paleo-environmental reconstruction at high resolution, and (iii) provide quantitative results (TOC values) validated by destructive analyses.
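The least-squares calibration described in the record above fits a line from a hyperspectral index to measured TOC and reports an r² for the fit. A minimal sketch on synthetic data; the slope, intercept, noise level, and number of samples are illustrative assumptions (only the sample count of 73 echoes the study), not the study's calibration values:

```python
import numpy as np

def calibrate_toc(index, toc):
    """Fit a least-squares line mapping a spectral index to measured TOC;
    return (slope, intercept, r_squared)."""
    slope, intercept = np.polyfit(index, toc, 1)
    pred = slope * index + intercept
    ss_res = np.sum((toc - pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((toc - toc.mean()) ** 2)     # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot

# Synthetic calibration set standing in for the 73 core samples
rng = np.random.default_rng(2)
index = rng.uniform(0.0, 1.0, 73)                # hypothetical spectral index
toc = 2.0 * index + 0.5 + 0.1 * rng.normal(size=73)
slope, intercept, r2 = calibrate_toc(index, toc)
```

Once calibrated, the fitted line can be applied pixel by pixel to the hyperspectral index map to extrapolate TOC at the 0.02 mm imaging resolution.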

  5. Integrated software environment based on COMKAT for analyzing tracer pharmacokinetics with molecular imaging.

    PubMed

    Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F

    2010-01-01

    An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
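    The kinetic-modeling workflow the abstract describes (choose a model, load an input function, estimate parameters) can be illustrated with a one-tissue compartment model. This is a generic SciPy sketch under an assumed synthetic input function, not COMKAT code or its API.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Time grid (minutes) and a synthetic, bolus-like plasma input Cp(t)
    t = np.linspace(0, 60, 121)
    cp = t * np.exp(-t / 4.0)

    def one_tissue(t, K1, k2):
        """Tissue curve for a one-tissue compartment model,
        dCt/dt = K1*Cp(t) - k2*Ct(t), solved by discrete convolution."""
        dt = t[1] - t[0]
        return K1 * dt * np.convolve(cp, np.exp(-k2 * t))[: len(t)]

    # Simulate noisy data with known parameters, then re-estimate them
    true = (0.3, 0.1)
    rng = np.random.default_rng(0)
    ct = one_tissue(t, *true) + rng.normal(0, 0.05, t.size)
    est, _ = curve_fit(one_tissue, t, ct, p0=(0.1, 0.05), bounds=(0, 5))
    print(est)  # estimates should be close to (0.3, 0.1)
    ```

    The same pattern (forward model plus least-squares fit) generalizes to multi-compartment models; only the model function changes.
    
    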

  6. Automated analysis of high-content microscopy data with deep learning.

    PubMed

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and retraining, which requires extensive effort. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
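    As a toy illustration of the kind of model DeepLoc builds on, the forward pass of a one-layer convolutional classifier can be written in plain NumPy. The filters and class weights below are random stand-ins; DeepLoc itself is a much deeper network trained on labeled yeast images, which this sketch does not reproduce.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(img, kernels):
        """Valid-mode 2-D correlation of a single-channel image with a
        stack of kernels, implemented with sliding windows."""
        kh, kw = kernels.shape[1:]
        windows = np.lib.stride_tricks.sliding_window_view(img, (kh, kw))
        return np.einsum('ijkl,nkl->nij', windows, kernels)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # A random 64x64 "cell crop", 8 random 3x3 filters, 5 localization classes
    img = rng.random((64, 64))
    filters = rng.normal(size=(8, 3, 3))
    weights = rng.normal(size=(5, 8))

    # Convolution -> ReLU -> global average pooling -> linear layer -> softmax
    feats = np.maximum(conv2d(img, filters), 0).mean(axis=(1, 2))
    probs = softmax(weights @ feats)
    print(probs)  # a probability distribution over the 5 classes
    ```

    Training (backpropagation over many labeled images) is what turns this architecture into a useful classifier; frameworks such as TensorFlow or PyTorch handle that in practice.
    
    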

  7. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that hides the low-level details lets developers concentrate on algorithm implementation. Our framework enables biomedical image analysis software to gain 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  8. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    NASA Astrophysics Data System (ADS)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently; reliable differentiation of arteries and veins is therefore essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis with a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels, and bottom-hat filtering and global thresholding are used to segment and skeletonize individual blood vessels. The vessel tracking algorithm locates the optic disk and identifies the source nodes of blood vessels in the optic disk area; each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, each vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. Fifty color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method against ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
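    The ODR step of the classification can be sketched as follows. The 0.6 cutoff and the reflectance values are illustrative stand-ins, not values from the paper; the direction of the rule (arteries carry oxygenated blood and absorb less red light, giving a lower red optical density) is the standard rationale for ODR-based labeling.

    ```python
    import numpy as np

    def odr(vessel_rgb, background_rgb):
        """Optical density ratio OD_red / OD_green for a vessel segment,
        where OD = -log10(I_vessel / I_background) per channel."""
        od = -np.log10(np.asarray(vessel_rgb) / np.asarray(background_rgb))
        return od[0] / od[1]

    def classify(vessel_rgb, background_rgb, threshold=0.6):
        """Arteries absorb less red light than veins, so their red optical
        density (and hence ODR) is lower; the cutoff is illustrative."""
        return 'artery' if odr(vessel_rgb, background_rgb) < threshold else 'vein'

    # A vessel that stays bright in red -> low OD_red -> artery
    print(classify([0.70, 0.35, 0.2], [0.9, 0.8, 0.7]))  # artery
    print(classify([0.30, 0.45, 0.2], [0.9, 0.8, 0.7]))  # vein
    ```

    In the full method this per-node label is propagated along the tracked vessel trace using curvature and angle continuity.
    
    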

  9. Rapid analysis and exploration of fluorescence microscopy images.

    PubMed

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify responses to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control but may also serve as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.
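    The segmentation-free idea behind PhenoRipper (summarize images by small blocks rather than by segmented cells) can be sketched like this. Note that PhenoRipper's actual algorithm clusters blocks into recurring block types; this simplified per-block histogram profile only mimics its spirit.

    ```python
    import numpy as np

    def block_profile(img, block=8, bins=4):
        """Split an image into non-overlapping blocks, summarize each by a
        normalized intensity histogram, and average over blocks to get a
        segmentation-free image profile."""
        h, w = img.shape
        profiles = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                patch = img[i:i + block, j:j + block]
                hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
                profiles.append(hist / hist.sum())
        return np.mean(profiles, axis=0)

    rng = np.random.default_rng(1)
    p1 = block_profile(rng.random((64, 64)))
    p2 = block_profile(rng.random((64, 64)) ** 2)  # different intensity dist.
    print(np.abs(p1 - p2).sum())  # the profiles separate the two conditions
    ```

    Comparing such profiles across wells is what enables rapid grouping of conditions by phenotype without any per-cell segmentation.
    
    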

  10. Detection and Characterization of Exoplanets using Projections on Karhunen-Loeve Eigenimages: Forward Modeling

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent

    2016-01-01

    A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but generally create systematic biases in their observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case in which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle-subtraction problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigenmodes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies in ongoing direct imaging surveys.
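    The KLIP projection that the forward model perturbs can be sketched in a few lines of NumPy. This shows only the baseline speckle subtraction (project the target onto the leading Karhunen-Loeve modes of a reference stack and subtract), not the covariance perturbation derived in the poster; the synthetic frames are stand-ins for real PSF data.

    ```python
    import numpy as np

    def klip_subtract(target, refs, n_modes=5):
        """Subtract the projection of a target frame onto the first
        n_modes KL modes of a reference stack (core of KLIP)."""
        R = refs.reshape(len(refs), -1)
        R = R - R.mean(axis=1, keepdims=True)
        # KL modes = eigenvectors of the reference covariance, via SVD
        _, _, vt = np.linalg.svd(R, full_matrices=False)
        Z = vt[:n_modes]                      # (n_modes, n_pixels)
        t = target.ravel() - target.mean()
        residual = t - Z.T @ (Z @ t)
        return residual.reshape(target.shape)

    rng = np.random.default_rng(2)
    speckles = rng.normal(size=(20, 32, 32))          # reference PSF frames
    target = speckles[:5].mean(axis=0) + rng.normal(0, 0.1, (32, 32))
    res = klip_subtract(target, speckles, n_modes=10)
    print(res.std(), target.std())  # residual scatter drops after subtraction
    ```

    Because a real companion also projects onto the KL modes, this subtraction biases its photometry; that is exactly the self-subtraction effect KLIP-FM propagates analytically.
    
    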

  11. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, to build and execute an array of image analysis routines, and to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
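    The general shape of a pipeline environment (processing nodes wired into a graph and executed in order) can be sketched as below. The `Node` class and function names are invented for illustration and do not reflect GRAPE's actual Qt/C++ node model.

    ```python
    # A toy node/pipeline executor: each node names its inputs, and the
    # runner feeds it the outputs of those upstream nodes.
    class Node:
        def __init__(self, name, func, inputs=()):
            self.name, self.func, self.inputs = name, func, inputs

    def run_pipeline(nodes, source):
        """Execute nodes in list order, threading results by name."""
        results = {'source': source}
        for n in nodes:
            results[n.name] = n.func(*[results[i] for i in n.inputs])
        return results

    pipeline = [
        Node('normalize', lambda x: [v / max(x) for v in x], ('source',)),
        Node('threshold', lambda x: [v > 0.5 for v in x], ('normalize',)),
        Node('count', sum, ('threshold',)),
    ]
    out = run_pipeline(pipeline, [1, 4, 2, 8, 6])
    print(out['count'])  # 2
    ```

    A graphical front end like GRAPE's essentially lets the user assemble such a node graph visually and inspect intermediate results while it runs against the scanner.
    
    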

  12. Evaluation of sensor, environment and operational factors impacting the use of multiple sensor constellations for long term resource monitoring

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan

    Moderate resolution remote sensing data offer the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher-level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult, if not impossible, with real data; therefore, a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environment and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors. The designed experiment and the sensitivity analysis conclude that atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
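    The kernel-driven BRDF modeling mentioned above can be illustrated with the standard RossThick volumetric kernel and a least-squares fit of the linear kernel model. The LiSparse geometric kernel and the DIRSIG scene are omitted for brevity, and the coefficients and viewing geometries below are synthetic.

    ```python
    import numpy as np

    def ross_thick(theta_i, theta_v, phi):
        """RossThick volumetric scattering kernel (standard form used in
        Ross-Li BRDF models); all angles in radians."""
        cos_xi = (np.cos(theta_i) * np.cos(theta_v)
                  + np.sin(theta_i) * np.sin(theta_v) * np.cos(phi))
        xi = np.arccos(np.clip(cos_xi, -1, 1))
        return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
                / (np.cos(theta_i) + np.cos(theta_v)) - np.pi / 4)

    # Fit the linear kernel model rho = f_iso + f_vol * K_vol to synthetic
    # reflectances by least squares, as one would per band per pixel.
    rng = np.random.default_rng(3)
    ti = rng.uniform(0.1, 1.0, 50)           # illumination zenith angles
    tv = rng.uniform(0.1, 1.0, 50)           # view zenith angles
    phi = rng.uniform(0, np.pi, 50)          # relative azimuth
    K = ross_thick(ti, tv, phi)
    rho = 0.05 + 0.02 * K + rng.normal(0, 0.001, 50)
    A = np.column_stack([np.ones_like(K), K])
    f_iso, f_vol = np.linalg.lstsq(A, rho, rcond=None)[0]
    print(f_iso, f_vol)  # recovers ~0.05 and ~0.02
    ```

    Fitting such kernel coefficients per scene is what lets multi-temporal analyses normalize away view- and illumination-angle effects before comparing dates or sensors.
    
    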

  13. New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Technical Reports Server (NTRS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.; hide

    2012-01-01

    We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approximately 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to HAWAII-2RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns-adi under a BSD license.
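    The trimmed-mean combination mentioned in the abstract is straightforward to sketch: sort each pixel's values along the sequence axis, drop the extremes, and average the rest. The trim fraction here is an arbitrary choice, not the value used by ACORNS-ADI.

    ```python
    import numpy as np

    def trimmed_mean_combine(stack, trim=0.1):
        """Per-pixel trimmed mean of a registered image sequence: drop the
        lowest and highest `trim` fraction of frames at each pixel and
        average the rest. Rejects outliers (cosmic rays, bad pixels)
        while losing less signal than a median."""
        n = stack.shape[0]
        cut = int(n * trim)
        s = np.sort(stack, axis=0)
        return s[cut:n - cut].mean(axis=0)

    rng = np.random.default_rng(4)
    frames = rng.normal(1.0, 0.1, size=(20, 16, 16))
    frames[3, 5, 5] = 50.0               # a cosmic-ray hit in one frame
    combined = trimmed_mean_combine(frames, trim=0.1)
    print(combined[5, 5])  # close to 1.0; the outlier is rejected
    ```

    A plain mean of the same stack would be pulled to roughly 3.45 at that pixel, which is why outlier-resistant combination matters for long ADI sequences.
    
    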

  14. Phase Imaging using Focusing Polycapillary Optics

    NASA Astrophysics Data System (ADS)

    Bashir, Sajid

    The interaction of X rays in the diagnostic energy range with soft tissues can be described by Compton scattering and by the complex refractive index, which together characterize the attenuation properties of the tissue and the phase imparted to X rays passing through it. Many soft tissues exhibit extremely similar attenuation, so that their discrimination using conventional radiography, which generates contrast in an image through differential attenuation, is challenging. However, these tissues will impart phase differences significantly greater than attenuation differences to the X rays passing through them, so that phase-contrast imaging techniques can enable their discrimination. A major limitation to the widespread adoption of phase-contrast techniques is that phase contrast requires significant spatial coherence of the X-ray beam, which in turn requires specialized sources. For tabletop sources, this often requires a small (usually in the range of 10-50 microns) X-ray source. In this work, polycapillary optics were employed to create a small secondary source from a large-spot rotating anode. Polycapillary optics consist of arrays of small hollow glass tubes through which X rays can be guided by total internal reflection from the tube walls. By tapering the tubes to guide the X rays to a point, they can be focused to a small spot which can be used as a secondary source. The polycapillary optic was first aligned with the X-ray source. The spot size was measured using a computed radiography image plate. Images were taken at a variety of optic-to-object and object-to-detector distances and phase-contrast edge enhancement was observed. Conventional absorption images were also acquired at small object-to-detector distances for comparison. Background division was performed to remove strong non-uniformity due to the optics. Differential phase contrast reconstruction demonstrates promising preliminary results. This manuscript is divided into six chapters.
The second chapter describes the limitations of conventional imaging methods and benefits of the phase imaging. Chapter three covers different types of X-ray photon interactions with matter. Chapter four describes the experimental set-up and different types of images acquired along with their analysis. Chapter five summarizes the findings in this project and describes future work as well.

  15. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
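    The least-squares combination underlying LOCI-family methods can be sketched as below: find the coefficients that best reproduce a target frame from a set of reference frames. MLOCI's additional step of tuning that fit against injected point sources is not reproduced here, and the synthetic frames are stand-ins for real data.

    ```python
    import numpy as np

    def loci_coefficients(target, refs):
        """Least-squares coefficients c minimizing
        || target - sum_i c_i * refs_i ||^2, the linear combination at
        the heart of LOCI-family speckle-subtraction algorithms."""
        R = refs.reshape(len(refs), -1).T          # (n_pixels, n_refs)
        c, *_ = np.linalg.lstsq(R, target.ravel(), rcond=None)
        return c

    rng = np.random.default_rng(5)
    refs = rng.normal(size=(6, 24, 24))
    true_c = np.array([0.5, -0.2, 0.1, 0.0, 0.3, 0.7])
    target = np.tensordot(true_c, refs, axes=1)    # exact linear mixture
    print(loci_coefficients(target, refs))         # recovers true_c
    ```

    Because these coefficients are tunable, a non-blind fit can also "sharpen" noise into a spurious source, which is exactly the false-detection pitfall the abstract warns about.
    
    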

  16. Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.

    2009-05-01

    In this paper, we propose spatio-spectral processing techniques for detecting dust storms and automatically finding their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band-math analysis have produced promising results but struggle to give consistent results on low signal-to-noise ratio (SNR) images. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm makes it challenging to distinguish these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery, but also attempts to find the transport direction and the location of the sources of the dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques, including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources; the automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
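    A minimal form of the directional analysis the paper builds on is a gradient-orientation histogram: the peak orientation of a plume's edges indicates its elongation, and hence transport, direction. This generic sketch stands in for, but does not reproduce, the authors' directional filters.

    ```python
    import numpy as np

    def dominant_orientation(img, n_bins=18):
        """Magnitude-weighted histogram of gradient orientations; the peak
        bin gives the dominant edge orientation (0..pi)."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)
        hist, edges = np.histogram(ang, bins=n_bins, range=(0, np.pi),
                                   weights=mag)
        k = np.argmax(hist)
        return 0.5 * (edges[k] + edges[k + 1])   # bin-center orientation

    # Synthetic "plume": vertical stripes, so gradients point along x
    y, x = np.mgrid[0:64, 0:64]
    img = np.sin(0.5 * x)
    theta = dominant_orientation(img)
    print(np.degrees(theta))  # near 0 degrees (gradient along x)
    ```

    A filter bank at several orientations (e.g., Gabor filters) refines this idea by responding selectively to elongated structures at each candidate direction.
    
    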

  17. Integrated system for automated financial document processing

    NASA Astrophysics Data System (ADS)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.

  18. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576

  19. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  20. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  1. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  2. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  3. Chandra follow up analysis on HESS J1841-055

    NASA Astrophysics Data System (ADS)

    Wilbert, Sven

    2012-07-01

    State-of-the-art Imaging Atmospheric Cherenkov Telescopes (IACTs) like the Very Energetic Radiation Imaging Telescope Array System (VERITAS) and the High Energy Stereoscopic System (H.E.S.S.) have surveyed the sky in order to discover new sources. The first and most famous is the H.E.S.S. survey of the inner Galactic plane. So far more than 50 Galactic TeV gamma-ray sources have been detected, a large number of which remain unidentified. HESS J1841-055 is one of the largest and most complex among these unidentified sources, with an extension of approximately 1°. Follow-up observations of the HESS J1841-055 region with Chandra, which thanks to its high angular resolution is well suited to searching for X-ray counterparts, together with additional analysis, have revealed several X-ray sources spatially coincident with the multiple TeV emission peaks. The search for counterparts suggests that not a single source but rather a collection of sources of different nature could be the origin of this complex diffuse emission region; among them are the SNR Kes 73, the pulsar within Kes 73, 1E 1841-045, the high-mass X-ray binary AX J1841.0-0536, and others.

  4. Use of a Machine Learning-Based High Content Analysis Approach to Identify Photoreceptor Neurite Promoting Molecules.

    PubMed

    Fuller, John A; Berlinicke, Cynthia A; Inglese, James; Zack, Donald J

    2016-01-01

    High content analysis (HCA) has become a leading methodology in phenotypic drug discovery efforts. Typical HCA workflows include imaging cells using an automated microscope and analyzing the data using algorithms designed to quantify one or more specific phenotypes of interest. Because of the richness of high content data, unappreciated phenotypic changes may be discovered in existing image sets using interactive machine-learning-based software systems. Primary postnatal day four retinal cells from QRX-EGFP reporter mice, in which photoreceptors (PRs) are labeled, were isolated, seeded, treated with a set of 234 profiled kinase inhibitors, and then cultured for 1 week. The cells were imaged with an Acumen plate-based laser cytometer to determine the number and intensity of GFP-expressing, i.e., PR, cells. Wells displaying intensities and counts above threshold values of interest were re-imaged at a higher resolution with an INCell2000 automated microscope. The images were analyzed with an open-source HCA analysis tool, PhenoRipper (Rajaram et al., Nat Methods 9:635-637, 2012), to identify the high GFP-inducing treatments that additionally resulted in diverse phenotypes compared with the vehicle control samples. The pyrimidinopyrimidone kinase inhibitor CHEMBL-1766490, a pan-kinase inhibitor whose major known targets are p38α and the Src family member lck, was thereby identified as an inducer of photoreceptor neuritogenesis. This finding was corroborated using a cell-based method of image analysis that measures quantitative differences in the mean neurite length in GFP-expressing cells. Interacting with data using machine learning algorithms may complement traditional HCA approaches by leading to the discovery of small-molecule-induced cellular phenotypes in addition to those upon which the investigator is initially focused.
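    The screening logic described above (threshold wells on GFP-positive cell count and intensity, then re-image the hits at higher resolution) can be sketched as follows; the well-record layout and threshold values are hypothetical, not taken from the paper:

```python
def wells_to_reimage(well_stats, count_threshold, intensity_threshold):
    """Select wells whose GFP-positive cell count AND mean GFP intensity
    both exceed the thresholds of interest, for higher-resolution
    re-imaging. well_stats maps well id -> (count, mean_intensity)."""
    return sorted(well
                  for well, (count, intensity) in well_stats.items()
                  if count > count_threshold and intensity > intensity_threshold)

# Hypothetical plate readout: only A01 passes both thresholds.
plate = {"A01": (120, 850.0), "A02": (30, 900.0), "A03": (200, 400.0)}
hits = wells_to_reimage(plate, count_threshold=100, intensity_threshold=500.0)
```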

  5. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which comprises the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, thus fulfilling the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Mixture Modeling for Background and Sources Separation in x-ray Astronomical Images

    NASA Astrophysics Data System (ADS)

    Guglielmetti, Fabrizia; Fischer, Rainer; Dose, Volker

    2004-11-01

    A probabilistic technique for the joint estimation of background and sources in high-energy astrophysics is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. The present analysis is applied to ROSAT PSPC data (0.1-2.4 keV) in Survey Mode. A background map is modelled using a Thin-Plate spline. Source probability maps are obtained for each pixel (45 arcsec) independently and for larger correlation lengths, revealing faint and extended sources. We will demonstrate that the described probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS) used for the production of the ROSAT All-Sky Survey (RASS) catalogues.
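    The two-component mixture idea can be illustrated with a minimal per-pixel calculation, assuming Poisson counts, a known background rate, a candidate source rate, and a flat prior; the paper's actual model additionally fits the background with a thin-plate spline:

```python
import math

def source_probability(counts, bkg_rate, src_rate, prior_src=0.5):
    """Posterior probability that a pixel contains a source, from a
    two-component Poisson mixture: background-only vs background+source."""
    def poisson(k, lam):
        return math.exp(-lam) * lam**k / math.factorial(k)
    like_bkg = poisson(counts, bkg_rate)            # background-only likelihood
    like_src = poisson(counts, bkg_rate + src_rate) # background+source likelihood
    num = prior_src * like_src
    return num / (num + (1.0 - prior_src) * like_bkg)
```

A pixel with counts well above the background rate gets a source probability near 1; a pixel consistent with background alone gets one near 0.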

  7. Structure function monitor

    DOEpatents

    McGraw, John T [Placitas, NM; Zimmer, Peter C [Albuquerque, NM; Ackermann, Mark R [Albuquerque, NM

    2012-01-24

    Methods and apparatus for a structure function monitor provide for generation of parameters characterizing a refractive medium. In an embodiment, a structure function monitor acquires images of a pupil plane and an image plane and, from these images, retrieves the phase over an aperture, unwraps the retrieved phase, and analyzes the unwrapped retrieved phase. In an embodiment, analysis yields atmospheric parameters measured at spatial scales from zero to the diameter of a telescope used to collect light from a source.
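    A minimal sketch of the quantity the monitor is named for, the empirical phase structure function D(r) = <(phi(x+r) - phi(x))^2>, computed here on a 1-D slice of unwrapped phase (the patent's analysis operates on full 2-D apertures):

```python
import numpy as np

def structure_function(phase, max_lag):
    """Empirical phase structure function D(r) = mean of
    (phase[x+r] - phase[x])**2 over a 1-D slice of unwrapped phase,
    for lags r = 1 .. max_lag (in samples)."""
    phase = np.asarray(phase, dtype=float)
    return np.array([np.mean((phase[r:] - phase[:-r]) ** 2)
                     for r in range(1, max_lag + 1)])
```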

  8. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.
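    The principle of anchoring the wavelength axis to known spectral features can be sketched with a simple polynomial correction, a stand-in for the non-rigid registration the authors actually use; the peak positions and reference wavelengths below are invented for illustration:

```python
import numpy as np

def spectral_calibration(measured_peaks, reference_wavelengths, degree=2):
    """Fit a smooth polynomial mapping from measured feature positions
    (e.g. AOTF tuning steps) to known reference wavelengths; the
    returned callable converts any measured position to a calibrated
    wavelength."""
    coeffs = np.polyfit(measured_peaks, reference_wavelengths, degree)
    return np.poly1d(coeffs)
```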

  9. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referring hospital of Bhutan, equipped with a CT, an MRI, digital radiology, and a suite of several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distribution of images to clinical wards. The solution developed for this project combines several open-source software platforms into a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of a highly popular open-source software package. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which significantly benefited their workflow and their ability to perform diagnostic tasks efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open-source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.

  10. Joint source based morphometry identifies linked gray and white matter group differences.

    PubMed

    Xu, Lai; Pearlson, Godfrey; Calhoun, Vince D

    2009-02-01

    We present a multivariate approach called joint source based morphometry (jSBM), to identify linked gray and white matter regions which differ between groups. In jSBM, joint independent component analysis (jICA) is used to decompose preprocessed gray and white matter images into joint sources and statistical analysis is used to determine the significant joint sources showing group differences and their relationship to other variables of interest (e.g. age or sex). The identified joint sources are groupings of linked gray and white matter regions with common covariation among subjects. In this study, we first provide a simulation to validate the jSBM approach. To illustrate our method on real data, jSBM is then applied to structural magnetic resonance imaging (sMRI) data obtained from 120 chronic schizophrenia patients and 120 healthy controls to identify group differences. JSBM identified four joint sources as significantly associated with schizophrenia. Linked gray-white matter regions identified in each of the joint sources included: 1) temporal--corpus callosum, 2) occipital/frontal--inferior fronto-occipital fasciculus, 3) frontal/parietal/occipital/temporal--superior longitudinal fasciculus and 4) parietal/frontal--thalamus. Age effects on all four joint sources were significant, but sex effects were significant only for the third joint source. Our findings demonstrate that jSBM can exploit the natural linkage between gray and white matter by incorporating them into a unified framework. This approach is applicable to a wide variety of problems to study linked gray and white matter group differences.
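    A toy numpy sketch of the jICA step, with the two modalities stacked side by side so that every estimated component couples a gray-matter and a white-matter pattern; this uses a plain symmetric FastICA with a tanh contrast, a simplification of the published pipeline:

```python
import numpy as np

def joint_ica(gray, white, n_components, n_iter=500, seed=0):
    """Joint ICA sketch: stack both modalities (each subjects x voxels)
    side by side and estimate spatial sources common to the stack."""
    X = np.hstack([gray, white])
    X = X - X.mean(axis=1, keepdims=True)
    # PCA whitening via SVD: rows of Vt (scaled) are decorrelated,
    # unit-variance spatial signals.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Y = (Vt[:n_components] * np.sqrt(X.shape[1])).T   # voxels x components
    rng = np.random.default_rng(seed)
    W = np.linalg.svd(rng.standard_normal((n_components, n_components)))[0]
    for _ in range(n_iter):                # symmetric FastICA, tanh contrast
        G = np.tanh(Y @ W.T)
        W_new = G.T @ Y / len(Y) - np.diag((1 - G**2).mean(axis=0)) @ W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt                         # symmetric decorrelation
    maps = (Y @ W.T).T                     # components x (gm+wm voxels)
    loadings = np.linalg.lstsq(maps.T, X.T, rcond=None)[0].T
    return maps, loadings                  # loadings: subjects x components
```

The first columns of each returned map cover gray matter and the rest white matter, so a single component indeed links patterns in both tissue types, and the per-subject loadings can be tested for group differences.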

  11. Managing an archive of weather satellite images

    NASA Technical Reports Server (NTRS)

    Seaman, R. L.

    1992-01-01

    The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers - especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host level programs for translating the images into IRAF and GIF formats and for slicing & dicing the resulting images. Hints for displaying the images and for making hardcopies are given.

  12. Coded-aperture X- or gamma-ray telescope with least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
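    The least-squares reconstruction and the mask-antimask background cancellation can be sketched with a toy linear model; the random binary matrix below stands in for the telescope's actual coded aperture:

```python
import numpy as np

def reconstruct(A, counts, background=0.0):
    """Least-squares sky estimate for counts = A @ sky + background,
    with fewer sky pixels than detector elements so the system is
    overdetermined (the role of the open-aperture 'limiter')."""
    return np.linalg.lstsq(A, counts - background, rcond=None)[0]

def mask_antimask(A, y_mask, y_antimask):
    """Differencing mask and antimask exposures cancels any detector
    background common to both: y_mask - y_antimask = (2A - 1) @ sky."""
    return np.linalg.lstsq(2.0 * A - 1.0, y_mask - y_antimask, rcond=None)[0]
```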

  13. Multispectral imaging of the ocular fundus using light emitting diode illumination

    NASA Astrophysics Data System (ADS)

    Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  14. Multispectral imaging of the ocular fundus using light emitting diode illumination.

    PubMed

    Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
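    The quoted signal-to-noise figure can be related to image statistics with the usual convention SNR(dB) = 20*log10(mean/std) over a nominally uniform region; this formula is an assumption, as the paper does not spell out its definition:

```python
import numpy as np

def snr_db(region):
    """Image SNR in decibels over a nominally uniform region:
    20 * log10(mean / std). A mean/std ratio of about 7.1 corresponds
    to roughly 17 dB."""
    region = np.asarray(region, dtype=float)
    return 20.0 * np.log10(region.mean() / region.std())
```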

  15. AMIDE: a free software tool for multimodality medical image analysis.

    PubMed

    Loening, Andreas Markus; Gambhir, Sanjiv Sam

    2003-07-01

    Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.

  16. Artifact correction and absolute radiometric calibration techniques employed in the Landsat 7 image assessment system

    USGS Publications Warehouse

    Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis

    1996-01-01

    The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM+) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM+ instrument characteristics, gleaned from analysis of archival Thematic Mapper in-flight data and from ETM+ prelaunch tests, allows the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM+ image and calibration data.

  17. The influence of biological and technical factors on quantitative analysis of amyloid PET: Points to consider and recommendations for controlling variability in longitudinal data.

    PubMed

    Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William

    2015-09-01

    In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
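    As one concrete example of the kind of quantitative endpoint under discussion, a standardized uptake value ratio (SUVR), target-region uptake normalized by a reference region, can be computed as below; the region choices are illustrative:

```python
import numpy as np

def suvr(pet_image, target_mask, reference_mask):
    """Standardized uptake value ratio: mean tracer uptake in a target
    (e.g. cortical) region divided by that in a reference region
    (e.g. cerebellum). Masks are boolean arrays on the image grid."""
    pet_image = np.asarray(pet_image, dtype=float)
    return pet_image[target_mask].mean() / pet_image[reference_mask].mean()
```

Small inconsistencies in acquisition or region placement shift both means and hence the ratio, which is one way the technical factors reviewed above translate into longitudinal measurement noise.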

  18. Imag(in)ing the University: Visual Sociology and Higher Education

    ERIC Educational Resources Information Center

    Metcalfe, Amy Scott

    2012-01-01

    This study examines the potential of visual sociology to expand our knowledge of higher education through the use of visual data sources and methods of analysis. Photographs and archival material form the basis of the study. The images were analyzed as being part of the initiation and fulfillment stages of the social construction of collective…

  19. Localizer: fast, accurate, open-source, and modular software package for superresolution microscopy

    PubMed Central

    Duwé, Sam; Neely, Robert K.; Zhang, Jin

    2012-01-01

    We present Localizer, a freely available and open source software package that implements the computational data processing inherent to several types of superresolution fluorescence imaging, such as localization (PALM/STORM/GSDIM) and fluctuation imaging (SOFI/pcSOFI). Localizer delivers high accuracy and performance and comes with a fully featured and easy-to-use graphical user interface but is also designed to be integrated in higher-level analysis environments. Due to its modular design, Localizer can be readily extended with new algorithms as they become available, while maintaining the same interface and performance. We provide front-ends for running Localizer from Igor Pro, Matlab, or as a stand-alone program. We show that Localizer performs favorably when compared with two existing superresolution packages, and to our knowledge is the only freely available implementation of SOFI/pcSOFI microscopy. By dramatically improving the analysis performance and ensuring the easy addition of current and future enhancements, Localizer strongly improves the usability of superresolution imaging in a variety of biomedical studies. PMID:23208219
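    The fluctuation-imaging part of the computation can be shown in miniature: second-order SOFI replaces each pixel with the temporal variance (second cumulant) of its intensity trace, so independently blinking emitters are enhanced while static background vanishes. A toy sketch, not Localizer's implementation:

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI sketch: pixelwise temporal variance of a
    fluctuating image stack of shape (frames, pixels...). Constant
    (non-blinking) background pixels map to zero."""
    stack = np.asarray(stack, dtype=float)
    return ((stack - stack.mean(axis=0)) ** 2).mean(axis=0)
```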

  20. The Digital Slide Archive: A Software Platform for Management, Integration, and Analysis of Histology for Cancer Research.

    PubMed

    Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D

    2017-11-01

    Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 American Association for Cancer Research.

  1. Characterization of a neutron sensitive MCP/Timepix detector for quantitative image analysis at a pulsed neutron source

    NASA Astrophysics Data System (ADS)

    Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.

    2017-07-01

    The uncertainties and the stability of a neutron sensitive MCP/Timepix detector operating in event timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant contribution to the uncertainty arises from counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible, based on error propagation, even when the pixel occupation probability exceeds 50%. Additionally, we have taken the multiple-counting effect into account in the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to current-driven ageing of the microchannel plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.
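    The overlap correction for a counter that registers at most one event per pixel per shutter frame has a standard closed form under a Poisson-arrival assumption; a sketch (the authors' full treatment also propagates the associated uncertainty):

```python
import math

def overlap_corrected_counts(hits, frames):
    """If a pixel fires in `hits` of `frames` shutter frames, the raw
    occupancy p = hits/frames saturates at high rates; for Poisson
    arrivals the corrected mean count is -ln(1 - p) events per frame,
    scaled back up by the number of frames."""
    p = hits / frames
    return -math.log(1.0 - p) * frames
```

At low occupancy the correction is negligible; at 50% occupancy the true count is already about 39% higher than the raw count.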

  2. TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.

    PubMed

    Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas

    2013-07-15

    The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set up analyses, without requiring programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. They are both free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.

  3. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we compare source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
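    The beamformer stage can be illustrated with the classic LCMV weight formula for a single source orientation; this is a generic sketch, whereas the paper's proposed variant adds weight normalization and an orthonormal lead field:

```python
import numpy as np

def lcmv_weights(cov, leadfield):
    """Linearly-constrained minimum-variance beamformer weights for one
    source: w = C^{-1} L / (L^T C^{-1} L). The resulting spatial filter
    has unit gain at the target location while minimizing output
    variance from everywhere else."""
    ci_l = np.linalg.solve(cov, leadfield)   # C^{-1} L without explicit inverse
    return ci_l / (leadfield @ ci_l)
```

Applying such filters over a grid of source locations yields the source-space time series that the proposed method then feeds into SVD + ICA.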

  4. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
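    A toy version of the multiplexing scheme: rows of a cyclic Hadamard S-matrix (order 7, built from a maximal-length sequence) select which sources fire together, and the image is recovered by solving the linear system. The matrices found by the paper's genetic search would simply replace s_matrix here:

```python
import numpy as np

def s_matrix(n=7):
    """Cyclic S-matrix of order 7: rows are cyclic shifts of an
    m-sequence, so every measurement turns on 4 of the 7 sources."""
    seq = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)
    return np.array([np.roll(seq, k) for k in range(n)])

def decode(S, measurements):
    """Recover per-source signals from multiplexed measurements y = S x."""
    return np.linalg.solve(S, measurements)
```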

  5. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
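    For reference, the simplest ADC model of the kind being compared is the two-point mono-exponential fit S(b) = S0 * exp(-b * ADC); the b-values in the test values below are illustrative:

```python
import math

def adc_two_point(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient from signals at two b-values under
    S(b) = S0 * exp(-b * ADC). With b in s/mm^2, ADC is in mm^2/s;
    consistent units and scale are exactly the metadata the DICOM
    parametric map object is meant to carry."""
    return math.log(s_low / s_high) / (b_high - b_low)
```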

  6. Combined FDTD-Monte Carlo analysis and a novel design for ZnO scintillator rods in polycarbonate membrane for X-ray imaging

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar; Mohammadi, Mohammad

    2017-05-01

    A combination of the Finite Difference Time Domain (FDTD) and Monte Carlo (MC) methods is proposed for simulation and analysis of ZnO microscintillators grown in a polycarbonate membrane. A planar 10 keV X-ray source irradiating the detector is simulated by the MC method, which provides the amount of absorbed X-ray energy in the assembly. The transport of the generated UV scintillation light and its propagation in the detector are studied by the FDTD method. Detector responses for different probable scintillation sites and for X-ray source energies from 10 to 25 keV are reported. Finally, a tapered geometry for the scintillators is proposed, which shows enhanced spatial resolution compared to the cylindrical geometry for imaging applications.

  7. A Complete Bank of Optical Images of the ICRF QSOs

    NASA Astrophysics Data System (ADS)

    Humberto Andrei, Alexandre; Taris, Francois; Anton, Sonia; Bourda, Geraldine; Damljanovic, Goran; Souchay, Jean; Vieira Martins, Roberto; Pursimo, Tapio; Barache, Christophe; Nepomuceno da Silva Neto, Dario; Fernandes Coelho, Bruno David

    2015-08-01

    We have been developing a systematic effort to collect good-quality images of the optical counterparts of ICRF sources, in particular those that have been regularly radio surveyed for future implementation at high frequencies and/or those that will be the link sources between the ICRF and the Gaia CRF. Observations have been taken at LNA/Brazil, CASLEO/Argentina, NOT/Spain, LFOA/Austria, Rozhen/Bulgaria, and ASV/Serbia. Complementary images were collected from the SDSS. As a step toward implementing such an image data bank and making it publicly available through the IERS service, we present its description, which comprises for each source the number of measurements, filter, pixel scale, size of field, and seeing at each observation. The photometric analysis is centered on morphology, since there are still cases in which the host galaxy is overwhelming, and many cases in which the host calls for non-stellar PSF modeling. On the basis of neighboring stars we assign magnitudes and variability whenever possible. Finally, assisted by previous literature, the redshift and luminosity are used to derive astrophysical quantities, in particular the absolute magnitude, SED, and spectral index. Moreover, since Gaia will not obtain direct images of the observed sources, the morphology and magnitude become useful as templates for assembling and interpreting the one-dimensional and discontinuous line-spread-function samplings that Gaia will deliver for each QSO.

  8. Thermal Nondestructive Characterization of Corrosion in Boiler Tubes by Application of a Moving Line Heat Source

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.

    2000-01-01

    Wall thinning in utility boiler waterwall tubing is a significant inspection concern for boiler operators. Historically, conventional ultrasonics has been used for inspection of these tubes. This technique has proved to be very labor intensive and slow. This has resulted in a "spot check" approach to inspections, making thickness measurements over a relatively small percentage of the total boiler wall area. NASA Langley Research Center has developed a thermal NDE technique designed to image and quantitatively characterize the amount of material thinning present in steel tubing. The technique involves the movement of a thermal line source across the outer surface of the tubing, followed by an infrared imager at a fixed distance behind the line source. Quantitative images of the material loss due to corrosion are reconstructed from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to reconstruct images of flaws. The application of the thermal line source, coupled with this analysis technique, represents a significant improvement in inspection speed for large structures such as boiler waterwalls while still providing high-resolution thickness measurements. A theoretical basis for the technique will be presented, thus demonstrating its quantitative nature. Further, results of laboratory experiments on flat panel specimens with fabricated material loss regions will be presented.

  9. A novel system for commissioning brachytherapy applicators: example of a ring applicator

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Van den Bosch, Michiel R.; Voncken, Robert; Podesta, Mark; Verhaegen, Frank

    2017-11-01

    A novel system was developed to improve the commissioning and quality assurance of brachytherapy applicators used in high dose rate (HDR) treatments. It employs an imaging panel to create reference images and to measure dwell times and dwell positions. As an example, two ring applicators of the same model were evaluated. An applicator was placed on the surface of an imaging panel and an HDR 192Ir source was positioned in an imaging channel above the panel to generate an image of the applicator, using the gamma photons of the brachytherapy source. The applicator projection image was overlaid with the images acquired by capturing the gamma photons emitted by the source dwelling inside the applicator. We verified 0.1, 0.2, 0.5 and 1.0 cm interdwell distances for different offsets, applicator inclinations and transfer tube curvatures. The data analysis was performed using in-house developed software capable of processing the data in real time, defining catheters and creating movies recording the irradiation procedure. One applicator showed up to 0.3 cm difference from the expected position for a specific dwell position; the problem appeared intermittently. The standard deviations of the remaining dwell positions (40 measurements) were less than 0.05 cm. The second ring applicator had similar reproducibility, with absolute coordinate differences from expected values ranging from -0.10 up to 0.18 cm. The curvature of the transfer tube can lead to differences larger than 0.1 cm, whilst the inclination of the applicator showed a negligible effect. The proposed method allows the verification of all steps of the irradiation, providing accurate information about dwell positions and dwell times. It allows the verification of small interdwell distances (⩽0.1 cm) and reduces measurement time. In addition, no additional radiation source is necessary, since the HDR 192Ir source itself is used to generate an image of the applicator.
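The core measurement idea, locating the dwelling source on the panel, can be sketched as an intensity-weighted centroid of a panel frame. The threshold and the synthetic Gaussian spot below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def source_centroid(frame, threshold=0.1):
    """Estimate the source dwell position as the intensity-weighted
    centroid of a background-thresholded panel frame (pixel units)."""
    img = np.clip(frame - threshold * frame.max(), 0.0, None)
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    w = img.sum()
    return (xs * img).sum() / w, (ys * img).sum() / w

# Synthetic panel frame: Gaussian spot centred at (12.3, 7.8) pixels
ys, xs = np.mgrid[0:32, 0:32]
frame = np.exp(-((xs - 12.3) ** 2 + (ys - 7.8) ** 2) / (2 * 2.0 ** 2))
cx, cy = source_centroid(frame)
print(cx, cy)
```

Interdwell distances then follow as Euclidean distances between successive centroids, which is how sub-millimetre position checks become possible with a panel whose pixel pitch is known.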

  10. Source Plane Reconstruction of the Bright Lensed Galaxy RCSGA 032727-132609

    NASA Technical Reports Server (NTRS)

    Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Wuyts, Eva; Koester, Benjamin P.; Bayliss, Matthew B.; Barrientos, L. Felipe

    2011-01-01

    We present new HST/WFC3 imaging data of RCS2 032727-132609, a bright lensed galaxy at z=1.7 that is magnified and stretched by the lensing cluster RCS2 032727-132623. Using this new high-resolution imaging, we modify our previous lens model (which was based on ground-based data) to fully understand the lensing geometry, and use it to reconstruct the lensed galaxy in the source plane. This giant arc represents a unique opportunity to peer into 100-pc scale structures in a high redshift galaxy. This new source reconstruction will be crucial for a future analysis of the spatially-resolved rest-UV and rest-optical spectra of the brightest parts of the arc.

  11. Analysis of field of view limited by a multi-line X-ray source and its improvement for grating interferometry.

    PubMed

    Du, Yang; Huang, Jianheng; Lin, Danying; Niu, Hanben

    2012-08-01

    X-ray phase-contrast imaging based on grating interferometry is a technique with the potential to provide absorption, differential phase contrast, and dark-field signals simultaneously. The multi-line X-ray source used recently in grating interferometry has the advantage of high-energy X-rays for imaging of thick samples for most clinical and industrial investigations. However, it has a drawback of limited field of view (FOV), because of the axial extension of the X-ray emission area. In this paper, we analyze the effects of axial extension of the multi-line X-ray source on the FOV and its improvement in terms of Fresnel diffraction theory. Computer simulation results show that the FOV limitation can be overcome by use of an alternative X-ray tube with a specially designed multi-step anode. The FOV of this newly designed X-ray source can be approximately four times larger than that of the multi-line X-ray source in the same emission area. This might be beneficial for the applications of X-ray phase contrast imaging in materials science, biology, medicine, and industry.

  12. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ

    PubMed Central

    Müller, Marcel; Mönkemöller, Viola; Hennig, Simon; Hübner, Wolfgang; Huser, Thomas

    2016-01-01

    Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns and reconstruct them into a super-resolved image. In its most common, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data, and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM takes away the hurdle of generating yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data, independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated and extended as the field of SR-SIM progresses. PMID:26996201

  13. CellProfiler and KNIME: open source tools for high content screening.

    PubMed

    Stöter, Martin; Niederlein, Antje; Barsacchi, Rico; Meyenhofer, Felix; Brandl, Holger; Bickle, Marc

    2013-01-01

    High content screening (HCS) has established itself in the world of the pharmaceutical industry as an essential tool for drug discovery and drug development. HCS is currently starting to enter the academic world and might become a widely used technology. Given the diversity of problems tackled in academic research, HCS could experience some profound changes in the future, mainly with more imaging modalities and smart microscopes being developed. Among the limitations to the establishment of HCS in academia are flexibility and cost. Flexibility is important to be able to adapt the HCS setup to accommodate the multiple different assays typical of academia. Many cost factors cannot be avoided, but the costs of the software packages necessary to analyze large datasets can be reduced by using Open Source software. We present and discuss the Open Source software CellProfiler for image analysis and KNIME for data analysis and data mining, which provide software solutions that increase flexibility and keep costs low.

  14. Simultaneous multi-frequency imaging observations of solar microwave bursts

    NASA Technical Reports Server (NTRS)

    Kundu, M. R.; White, S. M.; Schmahl, E. J.

    1989-01-01

    The results of simultaneous two-frequency imaging observations of solar microwave bursts with the Very Large Array are reviewed. Simultaneous 2 and 6 cm observations have been made of bursts which are optically thin at both frequencies, or optically thick at the lower frequency. In the latter case, the source structure may differ at the two frequencies, but the two sources usually seem to be related. However, this is not always true of simultaneous 6 and 20 cm observations. The results have implications for the analysis of nonimaging radio data of solar and stellar flares.

  15. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technologies such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives, more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant sources of methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for computation of geochemical budgets and identification of habitats at the seafloor.

  16. Using Image Attributes to Assure Accurate Particle Size and Count Using Nanoparticle Tracking Analysis.

    PubMed

    Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C

    2018-05-01

    Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs, and particle count by a count of imaged particles. The number of particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test whether image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
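The median-pixel-area check can be reproduced with a plain flood-fill connected-component pass over a thresholded frame. The threshold and the toy image below are illustrative; the paper's own open-source pipeline is not reproduced here.

```python
import numpy as np
from collections import deque

def blob_areas(img, threshold):
    """Areas (in pixels) of 4-connected bright regions above threshold,
    found by BFS flood fill; the median area is the camera-setting check."""
    mask = img > threshold
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        area, q = 0, deque([(y, x)])
        seen[y, x] = True
        while q:
            cy, cx = q.popleft()
            area += 1
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        areas.append(area)
    return areas

# Two "particles": a 2x2 blob (area 4) and a single hot pixel (area 1)
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0
img[5, 5] = 1.0
areas = blob_areas(img, 0.5)
print(sorted(areas), "median:", np.median(areas))
```

On real frames, the spread of these areas is what hints at polydispersity before any tracking is run.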

  17. Noise analysis for near field 3-D FM-CW radar imaging systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheen, David M.

    2015-06-19

    Near field radar imaging systems are used for several applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit the performance in several ways, including reduction in system sensitivity and reduction of image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
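As background for the chirp analysis, a minimal FM-CW range simulation (generic textbook processing, not the paper's model): the dechirped beat signal is Fourier transformed and the peak frequency is mapped back to range. All parameter values below are invented for illustration.

```python
import numpy as np

c = 3e8                      # speed of light, m/s
B, T = 2e9, 1e-3             # chirp bandwidth (Hz) and duration (s)
fs = 2e6                     # sample rate of the dechirped signal (Hz)
R_true = 1.5                 # target range (m), near-field scale
slope = B / T                # chirp slope, Hz/s
f_beat = 2 * R_true * slope / c   # beat frequency after mixing

t = np.arange(int(fs * T)) / fs
sig = np.cos(2 * np.pi * f_beat * t)
# additive thermal-noise stand-in
sig += 0.1 * np.random.default_rng(1).standard_normal(t.size)

# windowed FFT, pick the beat-frequency peak (skip the DC bin)
spec = np.abs(np.fft.rfft(sig * np.hanning(t.size)))
f_est = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec[1:]) + 1]
R_est = f_est * c / (2 * slope)
print(R_est)
```

Phase noise would enter this picture as a random perturbation of the chirp phase, smearing the beat-frequency peak and reducing image dynamic range, which is the effect the paper's time-domain simulation demonstrates.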

  18. Dual-energy x-ray image decomposition by independent component analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang

    2001-09-01

    The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. This method can be applied because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorptions. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energies used for imaging, while the results of the separation agree well with the conventional one.
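A minimal two-source ICA sketch of the separation principle: whiten the mixtures, then search for the rotation that maximizes non-Gaussianity (|kurtosis|). This stands in for a full ICA algorithm and uses synthetic 1-D signals rather than x-ray images.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent non-Gaussian "absorption" signals (stand-ins for the
# photoelectric and Compton contributions), mixed linearly as in the
# dual-energy log-attenuation model.
s = rng.uniform(-1, 1, size=(2, 5000))
A = np.array([[0.9, 0.4], [0.3, 0.8]])        # unknown mixing matrix
x = A @ s

# Whiten the mixtures ...
x = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(x))
z = (vecs / np.sqrt(vals)).T @ x              # unit-covariance data

# ... then find the rotation maximizing |kurtosis| of both components.
def kurt(u):
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

best_th, best_val = 0.0, -np.inf
for th in np.linspace(0, np.pi / 2, 181):
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    val = abs(kurt((R @ z)[0])) + abs(kurt((R @ z)[1]))
    if val > best_val:
        best_th, best_val = th, val

R = np.array([[np.cos(best_th), -np.sin(best_th)],
              [np.sin(best_th), np.cos(best_th)]])
y = R @ z                    # recovered sources (up to sign/permutation/scale)
corr = np.corrcoef(np.vstack([s, y]))[:2, 2:]
print(np.round(np.abs(corr), 2))
```

Each recovered component should correlate strongly with exactly one true source, illustrating why no prior knowledge of the mixing (i.e. of the x-ray energies) is needed.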

  19. Deep near-infrared survey of the Southern Sky (DENIS)

    NASA Technical Reports Server (NTRS)

    Deul, E.

    1992-01-01

    DENIS (Deep Near-Infrared Survey of the Southern Sky) will be the first complete census of astronomical sources in the near-infrared spectral range. The challenges of this novel survey are both scientific and technical. With phenomena radiating in the near-infrared ranging from brown dwarfs to galaxies in the early stages of cosmological evolution, the scientific exploitation of data relevant over such a wide range requires pooling expertise from several of the leading European astronomical centers. The technical challenges of a project which will provide an order of magnitude more sources than given by the IRAS space mission, and which will involve advanced data-handling and image-processing techniques, likewise require pooling of hardware and software resources, as well as of human expertise. The DENIS project team is composed of some 40 scientists, computer specialists, and engineers located in 5 European Community countries (France, Germany, Italy, The Netherlands, and Spain), with important contributions from specialists in Australia, Brazil, Chile, and Hungary. DENIS will survey the entire southern sky in 3 colors, namely in the I band at a wavelength of 0.8 micron, in the 1.25 micron J band, and in the 2.15 micron K' band. The sensitivity limits will be 18th magnitude in the I band, 16th in the J band, and 14.5th in the K' band. The angular resolution achieved will be 1 arcsecond in the I band, and 3.0 arcseconds in the J and K' bands. The European Southern Observatory 1 m telescope on La Silla will be dedicated to survey use during operations expected to last four years, commencing in late 1993. DENIS aims to provide the astronomical community with complete digitized infrared images of the full southern sky and a catalogue of extracted objects, both of the best quality and in readily accessible form. This will be achieved through dedicated software packages and specialized catalogues, and with assistance from the Leiden and Paris Data Analysis Centers.
The data will be mailed on DAT tapes from La Silla to the two Data Analysis Centers for further processing. Two centers are necessary because of the sheer quantity of data and because of the complementary roles the Centers will develop, each exploiting its own particular expertise. The Leiden Data Analysis Center (LDAC) will extract objects, establish their parameters, and archive them into a source catalogue. The LDAC will collaborate with the Groningen Space Research group, which has gained experience in infrared image handling from the IRAS satellite. The Paris Data Analysis Center (PDAC) will be responsible for archiving and preprocessing the raw data to provide a homogeneous set of data suitable for further reduction in both the Leiden and Paris data analysis streams. The PDAC will also extract and archive images for the sources flagged by the LDAC as extended, and create a catalogue of galaxies. In exploiting the DENIS data we foresee collaboration with other data analysis centers, such as the Observatoire de Lyon, where the relevant DENIS catalogue of galaxies can be incorporated into their extragalactic database. The Point Sources and the Small Extended Sources catalogues could be incorporated in the Late Type Star database at Montpellier, and in the SIMBAD database at CDS. At Groningen the IRAS Point Source catalogue and/or image data can be merged with the DENIS catalogues. At Meudon, algorithms and software will be developed with the main goal of assessing the limits reachable for the homogeneity and intrinsic consistency of the ensemble of images in the database (flat-fielding, relative positioning of the fields, bootstrapped flux calibration), but also for the data analysis.

  20. High-pitch dual-source CT angiography without ECG-gating for imaging the whole aorta: intraindividual comparison with standard pitch single-source technique without ECG-gating

    PubMed Central

    Manna, Carmelinda; Silva, Mario; Cobelli, Rocco; Poggesi, Sara; Rossi, Cristina; Sverzellati, Nicola

    2017-01-01

    PURPOSE We aimed to perform intraindividual comparison of computed tomography (CT) parameters, image quality, and radiation exposure between standard CT angiography (CTA) and high-pitch dual source (DS)-CTA in subjects undergoing serial CTA of the thoracoabdominal aorta. METHODS Eighteen subjects who underwent thoracoabdominal CTA with both the standard technique and the high-pitch DS-CTA technique within 6 months of each other were retrieved for intraindividual comparison of image quality in the thoracic and abdominal aorta. Quantitative analysis was performed by comparison of mean aortic attenuation, noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Qualitative analysis was performed by visual assessment of motion artifacts and diagnostic confidence. Radiation exposure was quantified by effective dose. Image quality was apportioned to radiation exposure by means of a figure of merit. RESULTS Mean aortic attenuation and noise were higher in high-pitch DS-CTA of the thoracoabdominal aorta, whereas SNR and CNR were similar in the thoracic aorta and significantly lower in high-pitch DS-CTA of the abdominal aorta (P = 0.024 and P = 0.016). High-pitch DS-CTA was significantly better in the first segment of the thoracic aorta. Effective dose was reduced by 72% in high-pitch DS-CTA. CONCLUSION High-pitch DS-CTA without electrocardiography-gating is an effective technique for imaging the aorta with very low radiation exposure and with significant reduction of motion artifacts in the ascending aorta; however, the overall quality of high-pitch DS-CTA in the abdominal aorta is lower than standard CTA. PMID:28703104
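The quantitative metrics compared above can be computed as follows. Exact ROI conventions (which region defines the noise) vary between studies, so this is a hedged sketch with invented HU values, not the paper's protocol.

```python
import numpy as np

def snr_cnr(aorta_roi, background_roi):
    """Common CTA image-quality metrics:
    noise = SD of the background ROI,
    SNR   = mean(aorta) / noise,
    CNR   = (mean(aorta) - mean(background)) / noise.
    Inputs are arrays of HU values sampled from each ROI."""
    noise = background_roi.std(ddof=1)
    snr = aorta_roi.mean() / noise
    cnr = (aorta_roi.mean() - background_roi.mean()) / noise
    return snr, cnr

rng = np.random.default_rng(3)
aorta = rng.normal(350.0, 20.0, 500)    # enhanced aortic lumen, HU (assumed)
muscle = rng.normal(50.0, 20.0, 500)    # background tissue ROI, HU (assumed)
snr, cnr = snr_cnr(aorta, muscle)
print(round(snr, 1), round(cnr, 1))
```

A figure of merit then relates such quality metrics to effective dose, e.g. CNR²/dose, so that a 72% dose reduction can be weighed against the lower abdominal SNR/CNR.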

  1. Continuous EEG source imaging enhances analysis of EEG-fMRI in focal epilepsy.

    PubMed

    Vulliemoz, S; Rodionov, R; Carmichael, D W; Thornton, R; Guye, M; Lhatoo, S D; Michel, C M; Duncan, J S; Lemieux, L

    2010-02-15

    EEG-correlated fMRI (EEG-fMRI) studies can reveal haemodynamic changes associated with Interictal Epileptic Discharges (IED). Methodological improvements are needed to increase sensitivity and specificity for localising the epileptogenic zone. We investigated whether the estimated EEG source activity improved models of the BOLD changes in EEG-fMRI data, compared to conventional "event-related" designs based solely on the visual identification of IED. Ten patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI. EEG Source Imaging (ESI) was performed on intra-fMRI averaged IED to identify the irritative zone. The continuous activity of this estimated IED source (cESI) over the entire recording was used for fMRI analysis (cESI model). The maps of BOLD signal changes explained by cESI were compared to results of the conventional IED-related model. ESI was concordant with non-invasive data in 13/15 different types of IED. The cESI model explained significant additional BOLD variance in regions concordant with video-EEG, structural MRI or, when available, intracranial EEG in 10/15 IED. The cESI model allowed better detection of the BOLD cluster, concordant with intracranial EEG in 4/7 IED, compared to the IED model. In 4 IED types, cESI-related BOLD signal changes were diffuse with a pattern suggestive of contamination of the source signal by artefacts, notably incompletely corrected motion and pulse artefact. In one IED type, there was no significant BOLD change with either model. Continuous EEG source imaging can improve the modelling of BOLD changes related to interictal epileptic activity and this may enhance the localisation of the irritative zone. Copyright 2009 Elsevier Inc. All rights reserved.

  2. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The implementation of the graph-cut based image registration technique helps detect the devastation across the coastline of Tohoku through changes in pixel intensity, carrying out regional segmentation for the change in coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the image and recovering the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The MATLAB-based analysis suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods. The method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate strategies for post-disaster needs assessment for coastal belts exposed to damage from strong shaking and tsunamis under disaster risk mitigation programs.
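The fractal-dimension side of such an analysis is commonly done by box counting; below is a generic sketch (not the authors' code) that estimates the dimension of a binary image as the slope of log N(s) versus log s, where N(s) counts occupied s-by-s boxes.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image: fit the slope
    of log N(s) vs log s, where N(s) is the number of s x s boxes
    containing at least one occupied pixel."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        view = mask[:n - n % s, :n - n % s]
        # collapse each s x s tile to "occupied or not"
        boxes = view.reshape(view.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled region has dimension ~2
full = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(full))
```

Applied to a binary map of epicentres or of segmented damage pixels, higher estimated dimensions correspond to denser, more clustered spatial patterns, which is the link to devastation level drawn in the abstract.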

  3. Hyperspectral fluorescence imaging coupled with multivariate image analysis techniques for contaminant screening of leafy greens

    NASA Astrophysics Data System (ADS)

    Everard, Colm D.; Kim, Moon S.; Lee, Hoyoung

    2014-05-01

    The production of contaminant-free fresh fruit and vegetables is needed to reduce foodborne illnesses and related costs. Leafy greens grown in the field can be susceptible to fecal matter contamination from uncontrolled livestock and wild animals entering the field. Pathogenic bacteria can be transferred via fecal matter, and several outbreaks of E. coli O157:H7 have been associated with the consumption of leafy greens. This study examines the use of hyperspectral fluorescence imaging coupled with multivariate image analysis to detect fecal contamination on spinach leaves (Spinacia oleracea). Hyperspectral fluorescence images from 464 to 800 nm were captured; ultraviolet excitation was supplied by two LED-based line light sources at 370 nm. Key wavelengths and algorithms useful for a contaminant screening optical imaging device were identified and developed, respectively. A non-invasive screening device has the potential to reduce the harmful consequences of foodborne illnesses.

  4. An efficient approach to integrated MeV ion imaging.

    PubMed

    Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M

    2018-03-01

    An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, is implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ list-mode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Exact image theory for the problem of dielectric/magnetic slab

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    The exact image method, recently introduced for the exact solution of electromagnetic field problems involving homogeneous half spaces and microstrip-like geometries, is developed for the problem of a homogeneous slab of dielectric and/or magnetic material in free space. Expressions for the image sources, creating the exact reflected and transmitted fields, are given and their numerical evaluation is demonstrated. Nonradiating modes, guided by the slab and responsible for the loss of convergence of the image functions, are considered and extracted. The theory allows, for example, the analysis of finite ground planes in microstrip antenna structures.

  6. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  7. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells.

    PubMed

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.
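The contour-delineation step described above is a region-growing method; here is a simplified 4-connected sketch of the idea. The intensity-tolerance criterion and the toy image are illustrative assumptions, not MATtrack's actual rule.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Delineate an ROI by 4-connected region growing: starting from a
    user seed, accept neighbours whose intensity is within tol of the
    seed pixel's intensity."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# Toy frame: a bright 5x5 "cell" on a dark background
img = np.zeros((16, 16))
img[4:9, 4:9] = 1.0
roi = region_grow(img, (6, 6), tol=0.2)
print(roi.sum())   # 25 pixels: the 5x5 bright patch
```

Average fluorescence tracking then reduces to taking the mean of `img[roi]` per frame, with a background ROI subtracted, which is the quantity MATtrack plots over time.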

  8. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells

    PubMed Central

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569

  9. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses.

    PubMed

    Bokhart, Mark T; Nazari, Milad; Garrard, Kenneth P; Muddiman, David C

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written in the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing the facile analysis of polarity switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow for the investigation of MMA across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has dramatically improved. These new tools and the improvements to the performance in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
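    The mass measurement accuracy (MMA) metric mentioned above is conventionally expressed in parts per million. A minimal sketch of the calculation follows; the m/z values are illustrative, not taken from the article.

    ```python
    def mma_ppm(measured_mz, theoretical_mz):
        """Mass measurement accuracy in parts per million (ppm)."""
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    # Example: a measured m/z of 760.5889 against a theoretical 760.5851
    err = mma_ppm(760.5889, 760.5851)
    print(round(err, 2))  # 5.0 (ppm)
    ```

    Applied per pixel across an image, this yields exactly the kind of MMA heatmap the abstract describes.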

  10. Integrating legacy tools and data sources

    DOT National Transportation Integrated Search

    1999-01-01

    Under DARPA and internal funding, Lockheed Martin has been researching information needs profiling to manage information dissemination as applied to logistics, image analysis and exploitation, and battlefield information management. We have demonstra...

  11. Coherent soft X-ray diffraction imaging of coliphage PR772 at the Linac coherent light source

    PubMed Central

    Reddy, Hemanth K.N.; Yoon, Chun Hong; Aquila, Andrew; Awel, Salah; Ayyer, Kartik; Barty, Anton; Berntsen, Peter; Bielecki, Johan; Bobkov, Sergey; Bucher, Maximilian; Carini, Gabriella A.; Carron, Sebastian; Chapman, Henry; Daurer, Benedikt; DeMirci, Hasan; Ekeberg, Tomas; Fromme, Petra; Hajdu, Janos; Hanke, Max Felix; Hart, Philip; Hogue, Brenda G.; Hosseinizadeh, Ahmad; Kim, Yoonhee; Kirian, Richard A.; Kurta, Ruslan P.; Larsson, Daniel S.D.; Duane Loh, N.; Maia, Filipe R.N.C.; Mancuso, Adrian P.; Mühlig, Kerstin; Munke, Anna; Nam, Daewoong; Nettelblad, Carl; Ourmazd, Abbas; Rose, Max; Schwander, Peter; Seibert, Marvin; Sellberg, Jonas A.; Song, Changyong; Spence, John C.H.; Svenda, Martin; Van der Schot, Gijs; Vartanyants, Ivan A.; Williams, Garth J.; Xavier, P. Lourdu

    2017-01-01

    Single-particle diffraction from X-ray Free Electron Lasers offers the potential for molecular structure determination without the need for crystallization. In an effort to further develop the technique, we present a dataset of coherent soft X-ray diffraction images of Coliphage PR772 virus, collected at the Atomic Molecular Optics (AMO) beamline with pnCCD detectors in the LAMP instrument at the Linac Coherent Light Source. The diameter of PR772 ranges from 65–70 nm, which is considerably smaller than the previously reported ~600 nm diameter Mimivirus. This reflects continued progress in XFEL-based single-particle imaging towards the single molecular imaging regime. The data set contains significantly more single particle hits than collected in previous experiments, enabling the development of improved statistical analysis, reconstruction algorithms, and quantitative metrics to determine resolution and self-consistency. PMID:28654088

  12. Coherent soft X-ray diffraction imaging of coliphage PR772 at the Linac coherent light source

    DOE PAGES

    Reddy, Hemanth K. N.; Yoon, Chun Hong; Aquila, Andrew; ...

    2017-06-27

    Single-particle diffraction from X-ray Free Electron Lasers offers the potential for molecular structure determination without the need for crystallization. In an effort to further develop the technique, we present a dataset of coherent soft X-ray diffraction images of Coliphage PR772 virus, collected at the Atomic Molecular Optics (AMO) beamline with pnCCD detectors in the LAMP instrument at the Linac Coherent Light Source. The diameter of PR772 ranges from 65–70 nm, which is considerably smaller than the previously reported ~600 nm diameter Mimivirus. This reflects continued progress in XFEL-based single-particle imaging towards the single molecular imaging regime. As a result, the data set contains significantly more single particle hits than collected in previous experiments, enabling the development of improved statistical analysis, reconstruction algorithms, and quantitative metrics to determine resolution and self-consistency.

  13. A Study on the Application of Normalized Point Source Sensitivity in Wide Field Optical Spectrometer of the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Chen, Li-si; Hu, Zhong-wen

    2017-10-01

    The image evaluation of an optical system is the core of optical design. Based on the analysis and comparison of the PSSN (Normalized Point Source Sensitivity) proposed in the image evaluation of the TMT (Thirty Meter Telescope) and the common image evaluation methods, the application of PSSN in the TMT WFOS (Wide Field Optical Spectrometer) is studied. It includes an approximate simulation of the atmospheric seeing, the effect of the displacement of M3 in the TMT on the PSSN of the system, the effect of the displacement of collimating mirror in the WFOS on the PSSN of the system, the relations between the PSSN and the zenith angle under different conditions of atmospheric turbulence, and the relation between the PSSN and the wavefront aberration. The results show that the PSSN is effective for the image evaluation of the TMT under a limited atmospheric seeing.
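    PSSN is commonly defined as the ratio of the integrated squared PSF of the perturbed system to that of the reference system, so a value below 1 indicates degraded image quality. The following is a hedged NumPy sketch using Gaussian stand-in PSFs; it is not the TMT optical model or this paper's computation.

    ```python
    import numpy as np

    # PSSN = sum(PSF_with_error**2) / sum(PSF_reference**2)
    x = np.linspace(-10, 10, 2001)

    def gaussian_psf(fwhm):
        sigma = fwhm / 2.355
        psf = np.exp(-0.5 * (x / sigma) ** 2)
        return psf / psf.sum()   # unit total flux

    ref = gaussian_psf(1.0)        # reference (seeing-limited) PSF
    aberrated = gaussian_psf(1.1)  # slightly broadened by a wavefront error
    pssn = np.sum(aberrated**2) / np.sum(ref**2)
    print(round(pssn, 3))  # 0.909, i.e. below 1: image quality degraded
    ```

    For normalized Gaussians the ratio reduces to the ratio of widths, which is why broadening the PSF by 10% lowers the PSSN to about 1/1.1.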

  14. Design of light guide sleeve on hyperspectral imaging system for skin diagnosis

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Chang, Chao-Hsin; Huang, Ting-Wei; Chiang, Hou-Chi; Wu, Jeng-Fu; Ou-Yang, Mang

    2017-08-01

    A hyperspectral imaging system is proposed for the early study of skin diagnosis. Stable, high-quality hyperspectral images are important for analysis. Therefore, a light guide sleeve (LGS) was designed to be embedded in the hyperspectral imaging system. It provides a uniform light source on the object plane at a fixed working distance. Furthermore, it shields ambient light from entering the system and increasing noise. To produce a uniform light source, the LGS device was designed as a symmetrical double-layered structure. It has light-cut structures to adjust the distribution of rays between the two layers and a Lambertian surface at the front end to promote output uniformity. In the simulation of the design, the uniformity of illuminance was about 91.7%; in measurements of the actual light guide sleeve, it was about 92.5%.
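    One common way to express illuminance uniformity is 1 - (Emax - Emin) / (Emax + Emin); the abstract does not state which definition it uses, so the following NumPy sketch and its sample values are purely illustrative.

    ```python
    import numpy as np

    def uniformity(illuminance):
        """Uniformity as 1 - (Emax - Emin) / (Emax + Emin).
        One common definition; others (e.g. Emin / Emax) also exist."""
        e = np.asarray(illuminance, dtype=float)
        return 1.0 - (e.max() - e.min()) / (e.max() + e.min())

    # Illustrative illuminance samples (lux) across the object plane
    samples = [980, 1000, 1040, 1015, 995]
    print(round(uniformity(samples) * 100, 1))  # 97.0 (percent)
    ```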

  15. Coherent soft X-ray diffraction imaging of coliphage PR772 at the Linac coherent light source.

    PubMed

    Reddy, Hemanth K N; Yoon, Chun Hong; Aquila, Andrew; Awel, Salah; Ayyer, Kartik; Barty, Anton; Berntsen, Peter; Bielecki, Johan; Bobkov, Sergey; Bucher, Maximilian; Carini, Gabriella A; Carron, Sebastian; Chapman, Henry; Daurer, Benedikt; DeMirci, Hasan; Ekeberg, Tomas; Fromme, Petra; Hajdu, Janos; Hanke, Max Felix; Hart, Philip; Hogue, Brenda G; Hosseinizadeh, Ahmad; Kim, Yoonhee; Kirian, Richard A; Kurta, Ruslan P; Larsson, Daniel S D; Duane Loh, N; Maia, Filipe R N C; Mancuso, Adrian P; Mühlig, Kerstin; Munke, Anna; Nam, Daewoong; Nettelblad, Carl; Ourmazd, Abbas; Rose, Max; Schwander, Peter; Seibert, Marvin; Sellberg, Jonas A; Song, Changyong; Spence, John C H; Svenda, Martin; Van der Schot, Gijs; Vartanyants, Ivan A; Williams, Garth J; Xavier, P Lourdu

    2017-06-27

    Single-particle diffraction from X-ray Free Electron Lasers offers the potential for molecular structure determination without the need for crystallization. In an effort to further develop the technique, we present a dataset of coherent soft X-ray diffraction images of Coliphage PR772 virus, collected at the Atomic Molecular Optics (AMO) beamline with pnCCD detectors in the LAMP instrument at the Linac Coherent Light Source. The diameter of PR772 ranges from 65-70 nm, which is considerably smaller than the previously reported ~600 nm diameter Mimivirus. This reflects continued progress in XFEL-based single-particle imaging towards the single molecular imaging regime. The data set contains significantly more single particle hits than collected in previous experiments, enabling the development of improved statistical analysis, reconstruction algorithms, and quantitative metrics to determine resolution and self-consistency.

  16. Using Cell-ID 1.4 with R for Microscope-Based Cytometry

    PubMed Central

    Bush, Alan; Chernomoretz, Ariel; Yu, Richard; Gordon, Andrew

    2012-01-01

    This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source. PMID:23026908

  17. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

    The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, although they may improve parametric images by reducing estimation bias.
Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
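    The CRLB reasoning can be illustrated on the simplest estimation problem: the bound is the inverse of the Fisher information. This NumPy sketch uses a toy Gaussian-mean model, not the paper's ABI imaging model; the numbers are illustrative only.

    ```python
    import numpy as np

    # For N independent Gaussian measurements with known noise sigma, the
    # Fisher information for the mean is I = N / sigma**2, so any unbiased
    # estimator has variance >= sigma**2 / N (the CRLB).
    sigma, N = 0.5, 11            # e.g. eleven analyzer-crystal positions
    fisher_info = N / sigma**2
    crlb = 1.0 / fisher_info
    print(crlb)  # 0.25 / 11 ~ 0.0227

    # The sample mean attains the bound; check empirically.
    rng = np.random.default_rng(1)
    estimates = rng.normal(0.0, sigma, size=(20000, N)).mean(axis=1)
    print(abs(estimates.var() - crlb) < 0.005)  # True: the mean is efficient
    ```

    In the ABI setting the model is multi-parameter, so the bound becomes the inverse of a Fisher information matrix, but the principle is the same.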

  18. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

    The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, although they may improve parametric images by reducing estimation bias.
Next, using CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402

  19. MANTiS: a program for the analysis of X-ray spectromicroscopy data.

    PubMed

    Lerotic, Mirna; Mak, Rachel; Wirick, Sue; Meirer, Florian; Jacobsen, Chris

    2014-09-01

    Spectromicroscopy combines spectral data with microscopy, where typical datasets consist of a stack of images taken across a range of energies over a microscopic region of the sample. Manual analysis of these complex datasets can be time-consuming, and can miss the important traits in the data. With this in mind we have developed MANTiS, an open-source tool developed in Python for spectromicroscopy data analysis. The backbone of the package involves principal component analysis and cluster analysis, classifying pixels according to spectral similarity. Our goal is to provide a data analysis tool which is comprehensive, yet intuitive and easy to use. MANTiS is designed to lead the user through the analysis using story boards that describe each step in detail so that both experienced users and beginners are able to analyze their own data independently. These capabilities are illustrated through analysis of hard X-ray imaging of iron in Roman ceramics, and soft X-ray imaging of a malaria-infected red blood cell.
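    The PCA backbone described above can be sketched on a toy spectromicroscopy stack: images taken at many energies are reshaped into a pixels-by-energies matrix and decomposed. This is a hedged NumPy illustration of the general technique, not MANTiS's actual code.

    ```python
    import numpy as np

    # Synthetic stack: 500 pixels observed at 40 energies, built from two
    # underlying spectral components plus a little noise.
    rng = np.random.default_rng(2)
    n_pix, n_e = 500, 40
    spectra = np.stack([np.sin(np.linspace(0, 3, n_e)),
                        np.cos(np.linspace(0, 3, n_e))])
    weights = rng.random((n_pix, 2))
    data = weights @ spectra + 0.01 * rng.standard_normal((n_pix, n_e))

    # PCA via SVD of the mean-centered matrix
    centered = data - data.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print(explained[:2].sum() > 0.99)  # True: two components dominate
    ```

    Cluster analysis would then group pixels by their coordinates in this reduced component space, which is the spectral-similarity classification the abstract refers to.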

  20. Kinetic Analysis of Amylase Using Quantitative Benedict's and Iodine Starch Reagents

    ERIC Educational Resources Information Center

    Cochran, Beverly; Lunday, Deborah; Miskevich, Frank

    2008-01-01

    Quantitative analysis of carbohydrates is a fundamental analytical tool used in many aspects of biology and chemistry. We have adapted a technique developed by Mathews et al. using an inexpensive scanner and open-source image analysis software to quantify amylase activity using both the breakdown of starch and the appearance of glucose. Breakdown…
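    Scanner-based quantification of this kind typically rests on a linear calibration between pixel intensity and analyte concentration. The following NumPy sketch uses invented calibration numbers, not data from the article.

    ```python
    import numpy as np

    # Hypothetical calibration: mean scanned-pixel intensity of reagent
    # wells versus known glucose concentration (illustrative values).
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])             # mg/mL glucose
    intensity = np.array([10.0, 32.0, 55.0, 98.0, 190.0])  # mean pixel value

    slope, intercept = np.polyfit(conc, intensity, 1)

    def estimate_conc(mean_intensity):
        """Invert the linear calibration to estimate concentration."""
        return (mean_intensity - intercept) / slope

    print(round(estimate_conc(100.0), 2))  # 2.01 (mg/mL)
    ```

    Tracking such estimates over time as amylase digests starch gives the kinetic curves the activity analysis is built on.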

  1. Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis

    NASA Technical Reports Server (NTRS)

    Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.

    2004-01-01

    This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.

  2. Atlas-Guided Segmentation of Vervet Monkey Brain MRI

    PubMed Central

    Fedorov, Andriy; Li, Xiaoxing; Pohl, Kilian M; Bouix, Sylvain; Styner, Martin; Addicott, Merideth; Wyatt, Chris; Daunais, James B; Wells, William M; Kikinis, Ron

    2011-01-01

    The vervet monkey is an important nonhuman primate model that allows the study of isolated environmental factors in a controlled environment. Analysis of monkey MRI often suffers from lower quality images compared with human MRI because clinical equipment is typically used to image the smaller monkey brain and higher spatial resolution is required. This, together with the anatomical differences of the monkey brains, complicates the use of neuroimage analysis pipelines tuned for human MRI analysis. In this paper we developed an open source image analysis framework based on the tools available within the 3D Slicer software to support a biological study that investigates the effect of chronic ethanol exposure on brain morphometry in a longitudinally followed population of male vervets. We first developed a computerized atlas of vervet monkey brain MRI, which was used to encode the typical appearance of the individual brain structures in MRI and their spatial distribution. The atlas was then used as a spatial prior during automatic segmentation to process two longitudinal scans per subject. Our evaluation confirms the consistency and reliability of the automatic segmentation. The comparison of atlas construction strategies reveals that the use of a population-specific atlas leads to improved accuracy of the segmentation for subcortical brain structures. The contribution of this work is twofold. First, we describe an image processing workflow specifically tuned towards the analysis of vervet MRI that consists solely of the open source software tools. Second, we develop a digital atlas of vervet monkey brain MRIs to enable similar studies that rely on the vervet model. PMID:22253661

  3. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
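    To give a flavor of the kind of operation the toolkit provides (e.g. `skimage.filters.threshold_otsu` from its restoration/segmentation modules), here is a pure-NumPy sketch of Otsu thresholding; in practice one would call the scikit-image function rather than reimplement it.

    ```python
    import numpy as np

    def otsu_threshold(img, nbins=256):
        """Otsu's method: pick the threshold maximizing between-class
        variance. A NumPy sketch of what skimage.filters.threshold_otsu
        computes, for illustration only."""
        counts, edges = np.histogram(img, bins=nbins)
        centers = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(counts)                    # pixels below each split
        w1 = np.cumsum(counts[::-1])[::-1]        # pixels at/above each split
        mu0 = np.cumsum(counts * centers) / np.maximum(w0, 1)
        mu1 = np.cumsum((counts * centers)[::-1])[::-1] / np.maximum(w1, 1)
        # between-class variance at each candidate split point
        var_between = w0[:-1] * w1[1:] * (mu0[:-1] - mu1[1:]) ** 2
        return centers[:-1][np.argmax(var_between)]

    # Bimodal toy image: dark background with a bright square
    img = np.zeros((64, 64)) + 0.2
    img[16:48, 16:48] = 0.8
    thr = otsu_threshold(img)
    mask = img > thr
    print(mask.sum(), (img[mask] == 0.8).all())  # 1024 True
    ```

    The same call pattern works unchanged on 3D microtomography volumes, which is part of what makes the library convenient for X-ray imaging workflows.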

  4. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
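    The blurring role of the PSF can be shown with a one-dimensional toy model: the recorded image is (ideally) the true source distribution convolved with the imaging system's PSF. This NumPy sketch is illustrative only and is not the paper's seismic formulation.

    ```python
    import numpy as np

    source = np.zeros(64)
    source[20] = 1.0          # a point source
    source[40] = 0.5          # a second, weaker point

    # Normalized Gaussian PSF (11 taps)
    psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
    psf /= psf.sum()

    # The "image" is the source blurred by the PSF
    blurred = np.convolve(source, psf, mode="same")
    print(np.argmax(blurred), round(blurred.sum(), 6))  # 20 1.5
    ```

    Source imaging methods that do not deconvolve this blur inherit it as artifacts, which is the degradation the PSF analysis in the paper quantifies.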

  5. Report for 2012 from the Bordeaux IVS Analysis Center

    NASA Technical Reports Server (NTRS)

    Charlot, Patrick; Bellanger, Antoine; Bouffet, Romuald; Bourda, Geraldine; Collioud, Arnaud; Baudry, Alain

    2013-01-01

    This report summarizes the activities of the Bordeaux IVS Analysis Center during the year 2012. The work focused on (i) regular analysis of the IVS-R1 and IVS-R4 sessions with the GINS software package; (ii) systematic VLBI imaging of the RDV sessions and calculation of the corresponding source structure index and compactness values; (iii) investigation of the correlation between astrometric position instabilities and source structure variations; and (iv) continuation of our VLBI observational program to identify optically-bright radio sources suitable for the link with the future Gaia frame. Also of importance is the 11th European VLBI Network Symposium, which we organized last October in Bordeaux and which drew much attention from the European and International VLBI communities.

  6. THE CELESTIAL REFERENCE FRAME AT 24 AND 43 GHz. II. IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charlot, P.; Boboltz, D. A.; Fey, A. L.

    2010-05-15

    We have measured the submilliarcsecond structure of 274 extragalactic sources at 24 and 43 GHz in order to assess their astrometric suitability for use in a high-frequency celestial reference frame (CRF). Ten sessions of observations with the Very Long Baseline Array have been conducted over the course of ~5 years, with a total of 1339 images produced for the 274 sources. There are several quantities that can be used to characterize the impact of intrinsic source structure on astrometric observations, including the source flux density, the flux density variability, the source structure index, the source compactness, and the compactness variability. A detailed analysis of these imaging quantities shows that (1) our selection of compact sources from 8.4 GHz catalogs yielded sources with flux densities, averaged over the sessions in which each source was observed, of about 1 Jy at both 24 and 43 GHz, (2) on average the source flux densities at 24 GHz varied by 20%-25% relative to their mean values, with variations in the session-to-session flux density scale being less than 10%, (3) sources were found to be more compact with less intrinsic structure at higher frequencies, and (4) variations of the core radio emission relative to the total flux density of the source are less than 8% on average at 24 GHz. We conclude that the reduction in the effects due to source structure gained by observing at higher frequencies will result in an improved CRF and a pool of high-quality fiducial reference points for use in spacecraft navigation over the next decade.

  7. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments show a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since independent, well-defined test points must be collected, but quantitative analysis of relative positional error is feasible.

  8. Stray light characteristics of the diffractive telescope system

    NASA Astrophysics Data System (ADS)

    Liu, Dun; Wang, Lihua; Yang, Wei; Wu, Shibin; Fan, Bin; Wu, Fan

    2018-02-01

    Diffractive telescope technology is an innovative solution for constructing large, lightweight space telescopes. However, the nondesign orders of diffractive optical elements (DOEs) may affect imaging performance as stray light. To study the stray light characteristics of a diffractive telescope, a prototype was developed and its stray light analysis model was established. The stray light characteristics, including ghost images, point source transmittance, and veiling glare index (VGI), were analyzed. During the star imaging test of the prototype, ghost images appeared around the star image as the exposure time of the charge-coupled device increased, consistent with the simulation results. The test result of the VGI was 67.11%, slightly higher than the calculated value of 57.88%. The study shows that the same-order diffraction of the diffractive primary lens and the correcting DOE is the main factor that causes ghost images. Stray light sources outside the field of view can illuminate the image plane through nondesign-order diffraction of the primary lens and contribute more than 90% of the stray light flux on the image plane. In summary, it is expected that these works will provide some guidance for optimizing the imaging performance of diffractive telescopes.

  9. Indigenous obesity in the news: a media analysis of news representation of obesity in Australia's Indigenous population.

    PubMed

    Islam, Salwa; Fitzgerald, Lisa

    2016-01-01

    High rates of obesity are a significant issue amongst Indigenous populations in many countries around the world. Media framing of issues can play a critical role in shaping public opinion and government policy. A broad range of media analyses have been conducted on various aspects of obesity, however media representation of Indigenous obesity remains unexplored. In this study we investigate how obesity in Australia's Indigenous population is represented in newsprint media coverage. Media articles published between 2007 and 2014 were analysed for the distribution and extent of coverage over time and across Indigenous and mainstream media sources using quantitative content analysis. Representation of the causes and solutions of Indigenous obesity and framing in text and image content was examined using qualitative framing analysis. Media coverage of Indigenous obesity was very limited with no clear trends in reporting over time or across sources. The single Indigenous media source was the second largest contributor to the media discourse of this issue. Structural causes/origins were most often cited and individual solutions were comparatively overrepresented. A range of frames were employed across the media sources. All images reinforced textual framing except for one article where the image depicted individual factors whereas the text referred to structural determinants. This study provides a starting point for an important area of research that needs further investigation. The findings highlight the importance of alternative news media outlets, such as The Koori Mail, and that these should be developed to enhance the quality and diversity of media coverage. Media organisations can actively contribute to improving Indigenous health through raising awareness, evidence-based balanced reporting, and development of closer ties with Indigenous health workers.

  10. Perceptual reversals during binocular rivalry: ERP components and their concomitant source differences.

    PubMed

    Britz, Juliane; Pitts, Michael A

    2011-11-01

    We used an intermittent stimulus presentation to investigate event-related potential (ERP) components associated with perceptual reversals during binocular rivalry. The combination of spatiotemporal ERP analysis with source imaging and statistical parametric mapping of the concomitant source differences yielded differences in three time windows: reversals showed increased activity in early visual (∼120 ms) and in inferior frontal and anterior temporal areas (∼400-600 ms) and decreased activity in the ventral stream (∼250-350 ms). The combination of source imaging and statistical parametric mapping suggests that these differences were due to differences in generator strength and not generator configuration, unlike the initiation of reversals in right inferior parietal areas. These results are discussed within the context of the extensive network of brain areas that has been implicated in the initiation, implementation, and appraisal of bistable perceptual reversals. Copyright © 2011 Society for Psychophysiological Research.

  11. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
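    The abstract does not spell out ACC's learning algorithms, but the bottleneck it describes — efficiently finding informative examples to annotate — is the classic active-learning setting. Below is a minimal numpy sketch of uncertainty (margin) sampling with a toy nearest-centroid classifier; the data, classifier, and batch size are hypothetical stand-ins, not ACC's actual method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy feature space: two phenotype clusters, mostly unlabeled.
    pool = np.vstack([rng.normal([4.0, 4.0], 1.0, size=(200, 2)),
                      rng.normal([0.0, 0.0], 1.0, size=(200, 2))])
    labeled_X = np.array([[4.0, 4.0], [0.0, 0.0]])   # two annotated seed cells
    labeled_y = np.array([1, 0])

    # Nearest-centroid classifier; the margin between the two nearest centroid
    # distances serves as an (inverse) uncertainty score.
    cents = np.array([labeled_X[labeled_y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(pool[:, None, :] - cents[None, :, :], axis=2)
    dists.sort(axis=1)
    margin = dists[:, 1] - dists[:, 0]

    # Query the five most ambiguous cells for annotation next.
    query_idx = np.argsort(margin)[:5]
    print(query_idx, margin[query_idx])
    ```

    Annotating the low-margin samples first is what "expedites the training process" in tools of this kind: labels are spent where the current model is least certain.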

  12. Spectral analysis of the Crab Nebula and GRB 160530A with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Sleator, Clio; Boggs, Steven E.; Chiu, Jeng-Lun; Kierans, Carolyn; Lowell, Alexander; Tomsick, John; Zoglauer, Andreas; Amman, Mark; Chang, Hsiang-Kuang; Tseng, Chao-Hsiung; Yang, Chien-Ying; Lin, Chih H.; Jean, Pierre; von Ballmoos, Peter

    2017-08-01

    The Compton Spectrometer and Imager (COSI) is a balloon-borne soft gamma-ray (0.2-5 MeV) telescope designed to study astrophysical sources including gamma-ray bursts and compact objects. As a compact Compton telescope, COSI has inherent sensitivity to polarization. COSI utilizes 12 germanium detectors to provide excellent spectral resolution. On May 17, 2016, COSI was launched from Wanaka, New Zealand and completed a successful 46-day flight on NASA’s new Superpressure balloon. To perform spectral analysis with COSI, we have developed an accurate instrument model as required for the response matrix. With carefully chosen background regions, we are able to fit the background-subtracted spectra in XSPEC. We have developed a model of the atmosphere above COSI based on the NRLMSISE-00 Atmosphere Model to include in our spectral fits. The Crab and GRB 160530A are among the sources detected during the 2016 flight. We present spectral analysis of these two point sources. Our GRB 160530A results are consistent with those from other instruments, confirming COSI’s spectral abilities. Furthermore, we discuss prospects for measuring the Crab polarization with COSI.

  13. Sensor-based architecture for medical imaging workflow analysis.

    PubMed

    Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis

    2014-08-01

    The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in the clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory, through a network of intelligent sensors. The proposed integration framework follows a SOA hybrid architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that will then be semantically indexed in the framework engine. The platform was deployed in the Cardiology department of a central hospital, allowing identification of processes' characteristics and users' behaviours that were unknown before the utilization of this solution.

  14. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Shabbir, Faizan; Gong, Chao

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
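    The source-shifting pixel super-resolution step can be illustrated in an idealized form: if each source position samples the scene on a sub-pixel-shifted low-resolution grid, interleaving the shifted frames recovers the fine grid. The sketch below assumes noise-free, pure-decimation sampling with known integer sub-pixel shifts — a deliberate simplification of the actual holographic pipeline:

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    f = 4                                   # super-resolution factor
    hr = rng.random((64, 64))               # "true" high-resolution hologram plane

    # Source shifting: each sub-pixel source position samples a shifted low-res grid.
    frames = {(dy, dx): hr[dy::f, dx::f] for dy, dx in product(range(f), repeat=2)}

    # Shift-and-add reconstruction: interleave the shifted frames on the fine grid.
    sr = np.zeros_like(hr)
    for (dy, dx), lr in frames.items():
        sr[dy::f, dx::f] = lr

    # Naive comparison: nearest-neighbour upsampling of a single low-res frame.
    naive = np.repeat(np.repeat(frames[(0, 0)], f, axis=0), f, axis=1)

    print(float(np.abs(sr - hr).max()), float(np.abs(naive - hr).mean()))
    ```

    In this noise-free toy the interleaved reconstruction is exact, while a single decimated frame cannot recover the fine structure; real reconstructions must also contend with noise and non-integer shifts.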

  15. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    PubMed

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data: in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, for example, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, one for each combination of available data sources. To solve all classification tasks jointly, MLPD links them together by constraining them to achieve a similar estimated mean difference between the two classes for the features they share. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains the different classification tasks to choose a common subset of the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, MLPD can be efficiently implemented by linear programming. To validate the method, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. 
We further compared our method with the iMSF method (using incomplete MRI and PET images) and with single-task classification (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of the proposed MLPD method.
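    The core ingredient of MLPD — a discriminant obtained by linear programming — can be sketched in a simplified single-task form: minimise the l1 norm of the weight vector subject to a fixed projected mean difference between classes, which naturally yields a sparse feature subset. This is an illustrative reduction, not the paper's full multi-task formulation, and it assumes SciPy's `linprog` is available:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    d = 10

    # Synthetic two-class features (standing in for MRI-derived measures);
    # only the first three features carry any class difference.
    shift = np.zeros(d)
    shift[:3] = 1.0
    X_pos = rng.normal(shift, 1.0, size=(80, d))
    X_neg = rng.normal(np.zeros(d), 1.0, size=(80, d))

    # Linear-programming discriminant: minimise ||w||_1 subject to
    # w . (mu_pos - mu_neg) = 1.  Split w = u - v with u, v >= 0 so both the
    # objective and the constraint are linear.
    diff = X_pos.mean(axis=0) - X_neg.mean(axis=0)
    c = np.ones(2 * d)
    A_eq = np.concatenate([diff, -diff])[None, :]
    res = linprog(c, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * d), method="highs")
    w = res.x[:d] - res.x[d:]
    print(res.status, np.flatnonzero(np.abs(w) > 1e-9))
    ```

    The l1 objective drives most weights to exactly zero; MLPD's contribution is coupling many such problems (one per modality combination) through shared-feature constraints.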

  16. Multi-Task Linear Programming Discriminant Analysis for the Identification of Progressive MCI Individuals

    PubMed Central

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data: in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, for example, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, one for each combination of available data sources. To solve all classification tasks jointly, MLPD links them together by constraining them to achieve a similar estimated mean difference between the two classes for the features they share. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains the different classification tasks to choose a common subset of the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, MLPD can be efficiently implemented by linear programming. To validate the method, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. 
We further compared our method with the iMSF method (using incomplete MRI and PET images) and with single-task classification (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of the proposed MLPD method. PMID:24820966

  17. SU-C-17A-02: Sirius MRI Markers for Prostate Post-Implant Assessment: MR Protocol Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, T; Wang, J; Kudchadker, R

    Purpose: Currently, CT is used to visualize prostate brachytherapy sources, at the expense of accurate structure contouring. MRI is superior to CT for anatomical delineation, but the sources appear as voids on MRI images. Previously we developed Sirius MRI markers (C4 Imaging) to replace spacers and assist source localization on MRI images. Here we develop an MRI pulse sequence protocol that enhances the signal of these markers to enable MRI-only post-implant prostate dosimetric analysis. Methods: To simulate a clinical scenario, a CIRS multi-modality prostate phantom was implanted with 66 markers and 86 sources. The implanted phantom was imaged on both 1.5T and 3.0T GE scanners under various conditions: different pulse sequences (2D fast spin echo [FSE], 3D balanced steady-state free precession [bSSFP] and 3D fast spoiled gradient echo [FSPGR]), as well as varying amounts of padding to simulate various patient sizes and the associated signal fall-off from the surface coil elements. Standard FSE sequences from the current clinical protocols were also evaluated. Marker visibility, marker size, intra-marker distance, total scan time and artifacts were evaluated for various combinations of echo time, repetition time, flip angle, number of excitations, bandwidth, slice thickness and spacing, field-of-view, frequency/phase encoding steps and frequency direction. Results: We have developed a 3D FSPGR pulse sequence that enhances marker signal and ensures the integrity of the marker shape while maintaining reasonable scan time. For patients contraindicated for 3.0T, we have also developed a similar sequence for 1.5T scanners. Signal fall-off with distance from prostate to coil can be compensated mainly by decreasing bandwidth. The markers are not visible using standard FSE sequences. FSPGR sequences are more robust for consistent marker visualization as compared to bSSFP sequences. 
Conclusion: The developed MRI pulse sequence protocol for Sirius MRI markers assists source localization to enable MRI-only post-implant prostate dosimetric analysis. S.J. Frank is a co-founder of C4 Imaging (manufacturer of the MRI markers).

  18. Thermographic Imaging of Material Loss in Boiler Water-Wall Tubing by Application of Scanning Line Source

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.

    2000-01-01

    Localized wall thinning due to corrosion in utility boiler water-wall tubing is a significant inspection concern for boiler operators. Historically, conventional ultrasonics has been used for inspection of these tubes. This technique has proven to be very manpower and time intensive. This has resulted in a spot check approach to inspections, documenting thickness measurements over a relatively small percentage of the total boiler wall area. NASA Langley Research Center has developed a thermal NDE technique designed to image and quantitatively characterize the amount of material thinning present in steel tubing. The technique involves the movement of a thermal line source across the outer surface of the tubing followed by an infrared imager at a fixed distance behind the line source. Quantitative images of the material loss due to corrosion are reconstructed from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to reconstruct images of flaws. The application of the thermal line source coupled with the analysis technique represents a significant improvement in the inspection speed for large structures such as boiler water-walls. A theoretical basis for the technique will be presented which explains the quantitative nature of the technique. Further, a dynamic calibration system will be presented for the technique that allows the extraction of thickness information from the temperature data. Additionally, the results of applying this technology to actual water-wall tubing samples and in situ inspections will be presented.

  19. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Davies, L. J. M.; Driver, S. P.; Koushan, S.; Taranu, D. S.; Casura, S.; Liske, J.

    2018-05-01

    We introduce PROFOUND, a source finding and image analysis package. PROFOUND provides methods to detect sources in noisy images, generate segmentation maps identifying the pixels belonging to each source, and measure statistics like flux, size, and ellipticity. These inputs are key requirements of PROFIT, our recently released galaxy profiling package, where the design aim is that these two software packages will be used in unison to semi-automatically profile large samples of galaxies. The key novel feature introduced in PROFOUND is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. We apply PROFOUND in a number of simulated and real-world cases, and demonstrate that it behaves reasonably given its stated design goals. In particular, it offers good initial parameter estimation for PROFIT, and also segmentation maps that follow the sometimes complex geometry of resolved sources, whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the PROFOUND and PROFIT pipeline, and adoption is being encouraged by publicly releasing the software for the open source R data analysis platform under an LGPL-3 license on GitHub (github.com/asgr/ProFound).
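    The dilated-segmentation photometry idea can be sketched with a noise-free toy image: threshold, label connected pixels into a segmentation map, then grow each segment to capture the faint wings of the profile without stealing a neighbour's pixels. This is a loose illustration of the concept, not ProFound's actual algorithm, and it assumes `scipy.ndimage` for labelling and dilation:

    ```python
    import numpy as np
    from scipy import ndimage

    yy, xx = np.mgrid[0:128, 0:128]

    def gauss(y0, x0, peak, sigma):
        return peak * np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2))

    # Noise-free toy image with two well-separated "galaxies".
    image = gauss(40, 40, 50.0, 3.0) + gauss(90, 85, 30.0, 4.0)

    # 1. Detect: threshold and label connected pixels into a segmentation map.
    segmap, nsrc = ndimage.label(image > 0.05)

    # 2. Measure on dilated segments so the faint profile wings are captured,
    #    while forbidding growth into pixels already owned by another source.
    fluxes = []
    for k in range(1, nsrc + 1):
        seg = segmap == k
        grown = ndimage.binary_dilation(seg, iterations=4) & ((segmap == 0) | seg)
        fluxes.append((image[seg].sum(), image[grown].sum()))
    print(nsrc, fluxes)
    ```

    The dilated apertures always contain at least the original segment flux, which is the sense in which segment dilation "fully contains the identifiable flux" compared with fixed circular apertures.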

  20. Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis

    NASA Astrophysics Data System (ADS)

    Moridnejad, A.; Karimi, N.; Ariya, P. A.

    2014-12-01

    The Middle East region has been considered responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms characterized on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted for the Deep Blue algorithm, and also utilize a frequency-of-occurrence approach to find the sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), a novel desertification map will be presented. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next six months, further research will be performed to confirm these preliminary results.

  1. Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel

    DOE PAGES

    Collette, R.; Douglas, J.; Patterson, L.; ...

    2015-05-01

    Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
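    Two of the three quality parameters named above can be approximated with simple image statistics — a least-squares plane fit for the illumination gradient, and the variance of a discrete Laplacian for focus. The sketch below is a hypothetical stand-in for the CellProfiler-based metric (scratch detection omitted; thresholds invented):

    ```python
    import numpy as np

    def illumination_gradient(img):
        """Fit a plane a*x + b*y + c to the image; return the tilt magnitude."""
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
        coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
        return float(np.hypot(coef[0], coef[1]))

    def focus_score(img):
        """Variance of a discrete Laplacian; low values indicate blur."""
        lap = (img[1:-1, 2:] + img[1:-1, :-2] + img[2:, 1:-1] + img[:-2, 1:-1]
               - 4 * img[1:-1, 1:-1])
        return float(lap.var())

    def passes_qc(img, max_grad=0.01, min_focus=0.05):
        # Hypothetical thresholds -- in practice tuned per instrument and assay.
        return illumination_gradient(img) <= max_grad and focus_score(img) >= min_focus

    # A sharp, evenly lit "image" versus a featureless illumination ramp.
    sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)  # checkerboard
    ramp = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 64))
    print(passes_qc(sharp), passes_qc(ramp))
    ```

    The checkerboard passes (zero tilt, high Laplacian variance) while the ramp fails on both counts, mirroring the pass/fail decision described in the abstract.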

  2. Joint source based morphometry identifies linked gray and white matter group differences

    PubMed Central

    Xu, Lai; Pearlson, Godfrey; Calhoun, Vince D.

    2009-01-01

    We present a multivariate approach called joint source based morphometry (jSBM), to identify linked gray and white matter regions which differ between groups. In jSBM, joint independent component analysis (jICA) is used to decompose preprocessed gray and white matter images into joint sources and statistical analysis is used to determine the significant joint sources showing group differences and their relationship to other variables of interest (e.g. age or sex). The identified joint sources are groupings of linked gray and white matter regions with common covariation among subjects. In this study, we first provide a simulation to validate the jSBM approach. To illustrate our method on real data, jSBM is then applied to structural magnetic resonance imaging (sMRI) data obtained from 120 chronic schizophrenia patients and 120 healthy controls to identify group differences. JSBM identified four joint sources as significantly associated with schizophrenia. Linked gray–white matter regions identified in each of the joint sources included: 1) temporal — corpus callosum, 2) occipital/frontal — inferior fronto-occipital fasciculus, 3) frontal/parietal/occipital/temporal —superior longitudinal fasciculus and 4) parietal/frontal — thalamus. Age effects on all four joint sources were significant, but sex effects were significant only for the third joint source. Our findings demonstrate that jSBM can exploit the natural linkage between gray and white matter by incorporating them into a unified framework. This approach is applicable to a wide variety of problems to study linked gray and white matter group differences. PMID:18992825

  3. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography

    PubMed Central

    Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.

    2010-01-01

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
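    The singular-value metrics described above reduce, at their simplest, to inspecting the singular value spectrum of the forward matrix and counting how many modes survive a given noise floor. A toy numpy sketch with an invented Gaussian-kernel forward model mimicking the smoothing of diffuse light transport:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy forward model: each measurement's sensitivity over a 1-D fluorophore
    # grid is a broad Gaussian, standing in for diffuse-transport kernels.
    n_nodes, n_meas = 200, 64
    x = np.linspace(0.0, 1.0, n_nodes)
    centers = rng.uniform(0.0, 1.0, n_meas)
    J = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * 0.1 ** 2))

    s = np.linalg.svd(J, compute_uv=False)   # singular value spectrum

    def usable_modes(s, noise_floor):
        """Number of singular modes above a relative measurement-noise level."""
        return int((s / s[0] > noise_floor).sum())

    for nf in (1e-2, 1e-4, 1e-6):
        print(nf, usable_modes(s, nf))
    ```

    The rapidly decaying spectrum is why the reconstruction is "noise-limited": raising the noise floor removes high-order (high-resolution) modes faster than adding detectors can restore them.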

  4. Effects of photon noise on speckle image reconstruction with the Knox-Thompson algorithm. [in astronomy

    NASA Technical Reports Server (NTRS)

    Nisenson, P.; Papaliolios, C.

    1983-01-01

    An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.

  5. Preliminary analysis of the sensitivity of AIRSAR images to soil moisture variations

    NASA Technical Reports Server (NTRS)

    Pardipuram, Rajan; Teng, William L.; Wang, James R.; Engman, Edwin T.

    1993-01-01

    Synthetic Aperture Radar (SAR) images acquired from various sources such as Shuttle Imaging Radar B (SIR-B) and airborne SAR (AIRSAR) have been analyzed for signatures of soil moisture. The SIR-B measurements have shown a strong correlation between measurements of surface soil moisture (0-5 cm) and the radar backscattering coefficient sigma(sup o). The AIRSAR measurements, however, indicated a lower sensitivity. In this study, an attempt has been made to investigate the causes for this reduced sensitivity.

  6. Repeatability of diagnostic ultrasonography in the assessment of the equine superficial digital flexor tendon.

    PubMed

    Pickersgill, C H; Marr, C M; Reid, S W

    2001-01-01

    A quantitative investigation of the variation that can occur during the course of ultrasonography of the equine superficial digital flexor tendon (SDFT) was undertaken. The aim was to use an objective measure, the tendon cross-sectional area (CSA), to quantify the variability occurring during ultrasonographic assessment of the equine SDFT. The effects of three variables on the CSA measurements were determined: 1) image acquisition operator (IAc): two different operators undertaking the ultrasonographic examination; 2) image analysis operator (IAn): two different operators calculating CSA values from previously stored images; and 3) analytical equipment used during CSA measurement (IEq): two different sets of equipment used during calculation of CSA values. CSA measurements were thus compared across three potential sources of variation: interoperator, during image acquisition; interoperator, during CSA measurement; and intraoperator, when using different analytical equipment. Two operators obtained transverse ultrasonographic images from the forelimb SDFTs of 16 National Hunt (NH) Thoroughbred (TB) racehorses, each undertaking analysis of their own and the other operator's images. One operator analysed their images using two sets of equipment. There was no statistically significant difference in the results obtained when different operators undertook image acquisition (P>0.05). At all but the most distal level, there was no significant difference when different equipment was used during analysis (P>0.05). A significant difference (P<0.01) was found when different operators undertook image analysis, one operator consistently returning larger measurements. Different operators undertaking different stages of an examination can thus introduce significant variability. 
To reduce confounding during ultrasonographic investigations involving multiple persons, one operator should undertake image analysis, although different operators may undertake image acquisition.
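    Inter-operator comparisons of this kind are commonly summarised with Bland-Altman statistics: the mean of the paired differences (bias) and its limits of agreement. A hedged numpy sketch on simulated CSA values — the numbers below are invented for illustration, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated CSA readings (mm^2) of the same 16 tendons by two operators;
    # operator B is simulated to read systematically larger (a fixed bias).
    true_csa = rng.normal(100.0, 10.0, 16)
    op_a = true_csa + rng.normal(0.0, 2.0, 16)
    op_b = true_csa + 3.0 + rng.normal(0.0, 2.0, 16)

    diff = op_b - op_a
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # Bland-Altman 95% limits of agreement
    print(f"bias = {bias:.1f} mm^2, limits of agreement = +/- {loa:.1f} mm^2")
    ```

    A nonzero bias with narrow limits of agreement matches the pattern reported above: one operator consistently returning larger measurements rather than scattering randomly.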

  7. Weak-lensing shear estimates with general adaptive moments, and studies of bias by pixellation, PSF distortions, and noise

    NASA Astrophysics Data System (ADS)

    Simon, Patrick; Schneider, Peter

    2017-08-01

    In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer the GLAM ellipticity, where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. 
We discuss the origin of this prior bias. With regard to a fully Bayesian lensing analysis, we point out that passing tests with source samples subject to constant shear may not be sufficient for an analysis of sources with varying shear.
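    The weighted quadrupole moments at the heart of a GLAM-style ellipticity can be written down directly. The sketch below computes them for a noise-free elliptical Gaussian with a fixed circular Gaussian weight; the adaptive matching of the weight and the bias corrections discussed above are omitted:

    ```python
    import numpy as np

    # Weighted quadrupole moments of a noise-free elliptical Gaussian "galaxy".
    y, x = np.mgrid[-32:33, -32:33].astype(float)
    sig_x, sig_y = 6.0, 3.0                       # elongated along the x axis
    img = np.exp(-(x ** 2 / (2 * sig_x ** 2) + y ** 2 / (2 * sig_y ** 2)))

    w = np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2))   # circular Gaussian weight
    W = w * img
    norm = W.sum()
    qxx = (W * x * x).sum() / norm
    qyy = (W * y * y).sum() / norm
    qxy = (W * x * y).sum() / norm

    # Complex ellipticity (chi) from the weighted second moments.
    chi1 = (qxx - qyy) / (qxx + qyy)
    chi2 = 2 * qxy / (qxx + qyy)
    print(chi1, chi2)
    ```

    The weight shrinks the measured moments relative to the unweighted ones — a concrete, small-scale view of why weighted-moment ellipticities need the calibration the paper analyses.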

  8. Design of FPGA ICA for hyperspectral imaging processing

    NASA Astrophysics Data System (ADS)

    Nordin, Anis; Hsu, Charles C.; Szu, Harold H.

    2001-03-01

    The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware, or firmware, in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high-resolution images and a large number of channels. Here, a pipelined solution of the firmware, realized using FPGAs, is drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual implementation in hardware. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
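    The blind source separation step maps naturally onto a FastICA-style fixed-point iteration: whiten, then repeatedly update each unmixing vector with a tanh nonlinearity and deflate against previously found components. A self-contained numpy sketch — two synthetic non-Gaussian sources mixed into seven channels, echoing the seven-channel model above; this is the generic algorithm, not the authors' FPGA firmware:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def fast_ica(X, n_comp, iters=200):
        """Deflationary FastICA with a tanh nonlinearity (X: channels x samples)."""
        X = X - X.mean(axis=1, keepdims=True)
        # Whiten, keeping only the top n_comp principal directions.
        d, E = np.linalg.eigh(np.cov(X))
        order = np.argsort(d)[::-1][:n_comp]
        Z = (E[:, order] / np.sqrt(d[order])).T @ X
        W = np.zeros((n_comp, n_comp))
        for i in range(n_comp):
            w = rng.normal(size=n_comp)
            w /= np.linalg.norm(w)
            for _ in range(iters):
                g = np.tanh(Z.T @ w)
                w_new = Z @ g / Z.shape[1] - (1.0 - g ** 2).mean() * w
                w_new -= W[:i].T @ (W[:i] @ w_new)   # orthogonalise (deflation)
                w_new /= np.linalg.norm(w_new)
                converged = abs(abs(w_new @ w) - 1.0) < 1e-10
                w = w_new
                if converged:
                    break
            W[i] = w
        return W @ Z            # recovered source signals

    # Two non-Gaussian "material abundance" signals mixed into seven channels.
    n = 20000
    S = np.vstack([rng.uniform(-1.0, 1.0, n), np.sign(rng.normal(size=n))])
    X = rng.normal(size=(7, 2)) @ S
    S_hat = fast_ica(X, 2)
    ```

    The recovered components match the originals up to sign and permutation — the usual ICA ambiguities, which a de-mixing pipeline resolves downstream.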

  9. Determination of the effect of source intensity profile on speckle contrast using coherent spatial frequency domain imaging

    PubMed Central

    Rice, Tyler B.; Konecky, Soren D.; Owen, Christopher; Choi, Bernard; Tromberg, Bruce J.

    2012-01-01

    Laser Speckle Imaging (LSI) is a fast, noninvasive technique to image particle dynamics in scattering media such as biological tissue. While LSI measurements are independent of the overall intensity of the laser source, we find that spatial variations in the laser source profile can impact measured flow rates. This occurs due to differences in average photon path length across the profile, and is of significant concern because all lasers have some degree of natural Gaussian profile, in addition to artifacts potentially caused by projecting optics. Two in vivo measurements were performed to show that flow rates differ based on location with respect to the beam profile. A quantitative analysis was then done through a speckle contrast forward model generated within a coherent Spatial Frequency Domain Imaging (cSFDI) formalism. The model predicts remitted speckle contrast as a function of spatial frequency, optical properties, and scattering dynamics. Comparisons with experimental speckle contrast images were made using liquid phantoms with known optical properties for three common beam shapes. cSFDI was found to accurately predict speckle contrast for all beam shapes to within 5% root-mean-square error. Suggestions for improving beam homogeneity are given, including a widening of the natural beam Gaussian, proper diffusing glass spreading, and flat-top shaping using microlens arrays. PMID:22741080
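    The basic computation behind LSI — the local speckle contrast K = σ/μ over a sliding window — is straightforward to sketch. The example below uses synthetic exponential-intensity speckle, with temporal averaging standing in for flow-induced blurring; it illustrates the K statistic only, not the paper's cSFDI forward model:

    ```python
    import numpy as np

    def speckle_contrast_map(I, win=7):
        """Local speckle contrast K = sigma/mean over a sliding win x win window."""
        def box(a):  # sliding-window sums via a zero-padded summed-area table
            c = np.pad(np.cumsum(np.cumsum(a, axis=0), axis=1), ((1, 0), (1, 0)))
            return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
        n = win * win
        mean = box(I) / n
        var = np.maximum(box(I ** 2) / n - mean ** 2, 0.0)
        return np.sqrt(var) / mean

    rng = np.random.default_rng(5)
    # Fully developed static speckle has exponentially distributed intensity
    # (K ~ 1); motion blurs speckle, modelled by averaging 16 patterns (K ~ 1/4).
    static = rng.exponential(1.0, (256, 256))
    flowing = rng.exponential(1.0, (16, 256, 256)).mean(axis=0)

    K_static = float(speckle_contrast_map(static).mean())
    K_flow = float(speckle_contrast_map(flowing).mean())
    print(K_static, K_flow)
    ```

    Lower contrast signals faster dynamics; the paper's point is that the source profile modulates this K map independently of the true flow.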

  10. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has evolved greatly, to the point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not yet mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is a wide variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (MathWorks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  11. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  12. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  13. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has evolved greatly, to the point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not yet mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is a wide variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (MathWorks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  14. Instrument and method for X-ray diffraction, fluorescence, and crystal texture analysis without sample preparation

    NASA Technical Reports Server (NTRS)

    Gendreau, Keith (Inventor); Martins, Jose Vanderlei (Inventor); Arzoumanian, Zaven (Inventor)

    2010-01-01

    An X-ray diffraction and X-ray fluorescence instrument for analyzing samples requiring no sample preparation includes an X-ray source configured to output a collimated X-ray beam comprising a continuum spectrum of X-rays to a predetermined coordinate, and a photon-counting X-ray imaging spectrometer disposed to receive X-rays output from an unprepared sample disposed at the predetermined coordinate upon exposure of the unprepared sample to the collimated X-ray beam. The X-ray source and the photon-counting X-ray imaging spectrometer are arranged in a reflection geometry relative to the predetermined coordinate.

  15. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on one type of depth cue, based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  16. Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications

    NASA Astrophysics Data System (ADS)

    Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.

    2002-08-01

    We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.

  17. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
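
    The reweighting idea behind such solvers can be illustrated on a toy sparse regression: each round solves a weighted convex (l1) surrogate, then sharpens the weights on small coefficients, approximating a sub-l1 quasinorm penalty. This is a simplified scalar (non-block) sketch with invented data, not the authors' MEG/EEG solver:

    ```python
    # Iteratively reweighted l1 via column scaling: min ||y - Gx|| + sum w_j|x_j|
    # is a plain Lasso on G with columns divided by w (then x = z / w).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p, k = 60, 100, 4
    G = rng.standard_normal((n, p))                 # toy "forward model"
    x_true = np.zeros(p)
    x_true[rng.choice(p, k, replace=False)] = rng.uniform(1, 3, k)
    y = G @ x_true + 0.01 * rng.standard_normal(n)

    w = np.ones(p)
    for _ in range(5):                              # a few reweighting rounds
        Gw = G / w                                  # per-column scaling
        model = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000).fit(Gw, y)
        x = model.coef_ / w
        w = 1.0 / (np.abs(x) + 1e-3)                # penalize small coefs harder

    rel_residual = np.linalg.norm(y - G @ x) / np.linalg.norm(y)
    print(rel_residual)
    ```

    Reweighting reduces the amplitude bias of a single Lasso pass because established coefficients see a progressively smaller effective penalty, which mirrors the amplitude-bias argument made in the abstract.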

  18. Optical System Design for Noncontact, Normal Incidence, THz Imaging of in vivo Human Cornea.

    PubMed

    Sung, Shijun; Dabironezare, Shahab; Llombart, Nuria; Selvin, Skyler; Bajwa, Neha; Chantra, Somporn; Nowroozi, Bryan; Garritano, James; Goell, Jacob; Li, Alex; Deng, Sophie X; Brown, Elliott; Grundfest, Warren S; Taylor, Zachary D

    2018-01-01

    Reflection mode Terahertz (THz) imaging of corneal tissue water content (CTWC) is a proposed method for early, accurate detection and study of corneal diseases. Despite promising results from ex vivo and in vivo cornea studies, interpretation of the reflectivity data is confounded by the contact between corneal tissue and the dielectric windows used to flatten the imaging field. Herein, we present an optical design for non-contact THz imaging of the cornea. A beam scanning methodology performs angular, normal incidence sweeps of a focused beam over the corneal surface while keeping the source, detector, and patient stationary. A quasioptical analysis method is developed to analyze the theoretical resolution and imaging field intensity profile. These results are compared to the electric field distribution computed with a physical optics analysis code. Imaging experiments validate the optical theories behind the design and suggest that quasioptical methods are sufficient for the design of THz corneal imaging systems. Successful imaging operations support the feasibility of non-contact in vivo imaging. We believe that this optical system design will enable the first clinically relevant in vivo exploration of CTWC using THz technology.

  19. Error tolerance analysis of wave diagnostic based on coherent modulation imaging in high power laser system

    NASA Astrophysics Data System (ADS)

    Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-02-01

    Coherent modulation imaging, which provides fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not yet been investigated. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of an HPLF, a quantitative statistical analysis was carried out considering five different error sources. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis indicate directions for further improving the accuracy of parameter diagnostics, which is critically important for its formal application in the daily routines of an HPLF.

  20. Analysis of trace fibers by IR-MALDESI imaging coupled with high resolving power MS

    PubMed Central

    Cochran, Kristin H.; Barry, Jeremy A.; Robichaud, Guillaume

    2016-01-01

    Trace evidence constitutes a significant portion of forensic cases. Textile fibers are a common form of trace evidence that are gaining importance in criminal cases. Currently, qualitative techniques that do not yield structural information are primarily used for fiber analysis, but mass spectrometry is gaining an increasing role in this field. Mass spectrometry yields more quantitative structural information about the dye and polymer that can be used for more conclusive comparisons. Matrix-assisted laser desorption electrospray ionization (MALDESI) is a hybrid ambient ionization source being investigated for use in mass spectrometric fiber analysis. In this manuscript, IR-MALDESI was used as a source for mass spectrometry imaging (MSI) of a dyed nylon fiber cluster and a single fiber. Information about both the fiber polymer and the dye was obtained from a single fiber on the order of 10 μm in diameter. These experiments were performed directly from the surface of a tape lift of the fiber against a background of extraneous fibers. PMID:25081013

  1. Analysis of trace fibers by IR-MALDESI imaging coupled with high resolving power MS.

    PubMed

    Cochran, Kristin H; Barry, Jeremy A; Robichaud, Guillaume; Muddiman, David C

    2015-01-01

    Trace evidence constitutes a significant portion of forensic cases. Textile fibers are a common form of trace evidence that are gaining importance in criminal cases. Currently, qualitative techniques that do not yield structural information are primarily used for fiber analysis, but mass spectrometry is gaining an increasing role in this field. Mass spectrometry yields more quantitative structural information about the dye and polymer that can be used for more conclusive comparisons. Matrix-assisted laser desorption electrospray ionization (MALDESI) is a hybrid ambient ionization source being investigated for use in mass spectrometric fiber analysis. In this manuscript, IR-MALDESI was used as a source for mass spectrometry imaging (MSI) of a dyed nylon fiber cluster and a single fiber. Information about both the fiber polymer and the dye was obtained from a single fiber on the order of 10 μm in diameter. These experiments were performed directly from the surface of a tape lift of the fiber against a background of extraneous fibers.

  2. Companions to α Orionis

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Nisenson, P.; Noyes, R. W.; Stachnik, R.

    Detection of two close optical companions to the red supergiant α Ori was accomplished using the PAPA detector for data recording and speckle imaging for image reconstruction. Our analysis favors an interpretation in which the two optical sources are stellar companions to α Ori. The observed time-dependent variations of the polarization of α Ori can be interpreted as being due to a systemic asymmetry created by one of the companions.

  3. Applications of High-speed motion analysis system on Solid Rocket Motor (SRM)

    NASA Astrophysics Data System (ADS)

    Liu, Yang; He, Guo-qiang; Li, Jiang; Liu, Pei-jin; Chen, Jian

    2007-01-01

    The high-speed motion analysis system can record images at up to 12,000 fps and analyze them with its image-processing system, storing data and images directly in electronic memory for convenient management and analysis. Combining the high-speed motion analysis system with an X-ray radiography system established a high-speed, real-time X-ray radiography system that can diagnose and measure dynamic, high-speed processes inside opaque objects. Image-processing software was developed to improve the quality of the original images and extract more precise information. Typical applications of the high-speed motion analysis system to solid rocket motors (SRM) are introduced in this paper. Using the system, studies were carried out on anomalous combustion of solid propellant grains with defects, real-time measurement of insulator erosion, the explosion incision process of a motor, the structure and wave character of the plume during ignition and flameout, end burning of solid propellant, measurement of the flame front, and compatibility between airplane and missile during missile launch, with significant results achieved. For application of the high-speed motion analysis system to solid rocket motors, the key problems that degraded image quality, such as motor vibration, power-supply instability, geometric distortion, and noise disturbance, were solved. The image-processing software improved the capability of measuring image characteristics. The experimental results showed that the system is a powerful facility for studying instantaneous, high-speed processes in solid rocket motors. With the development of image-processing techniques, the capability of the high-speed motion analysis system has been further enhanced.

  4. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  5. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  6. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  7. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  8. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  9. Smart phone: a popular device supports amylase activity assay in fisheries research.

    PubMed

    Thongprajukaew, Karun; Choodum, Aree; Sa-E, Barunee; Hayee, Ummah

    2014-11-15

    Colourimetric determination of amylase activity was developed based on a standard dinitrosalicylic acid (DNS) staining method, using maltose as the analyte. Intensities and absorbances of red, green and blue (RGB) were obtained with iPhone imaging and Adobe Photoshop image analysis. The correlation between green intensity and analyte concentration was highly significant, and the analytical performance of the developed method was excellent. The common iPhone has sufficient imaging ability for accurate quantification of maltose concentrations. Detection limits, sensitivity and linearity were comparable to a spectrophotometric method, with better inter-day precision. In quantifying amylase specific activity from a commercial source (P>0.02) and fish samples (P>0.05), differences compared with spectrophotometric measurements were not significant. We have demonstrated that iPhone imaging with image analysis in Adobe Photoshop has potential for field and laboratory studies of amylase. Copyright © 2014 Elsevier Ltd. All rights reserved.
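
    The calibration logic of such colourimetric assays reduces to a linear fit of channel intensity against known standards, then inverting that line for unknowns. A minimal sketch with invented numbers (not the paper's data):

    ```python
    # Green-channel calibration sketch: fit intensity vs. maltose standards,
    # then invert the line to estimate concentration from a measured intensity.
    import numpy as np

    # Mean green intensities of DNS-stained standards (synthetic values)
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # maltose, mg/mL
    green = np.array([210.0, 185.0, 160.0, 110.0, 10.0])  # darker = more maltose

    # Linear calibration: green = a * conc + b
    a, b = np.polyfit(conc, green, 1)

    def maltose_from_green(g):
        """Invert the calibration line to estimate concentration."""
        return (g - b) / a

    est = maltose_from_green(green)
    print(np.round(est, 2))   # recovers the standard concentrations
    ```

    In practice the mean green value would come from averaging a region of interest in the photograph, and the fit quality (R²) over the standards determines whether the phone camera's linearity is adequate for quantification.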

  10. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems because images capture the overall dynamics as a "whole." Therefore, extraction of key quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open-source software packages and algorithmic protocols provides a novel approach to modeling an experimental research program. Besides this, we also highlight predictable future trends in methods for automatically analyzing biological data. Such tools will be very useful for understanding detailed biological and mathematical expressions in in-silico systems biology models.

  11. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of the different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometer diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometer FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured under different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in these pictures, and they agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size 528 meters away, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.

  12. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R. M.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Kashyap, V. L.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Mossman, A. E.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2010-03-01

    The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public ACIS imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents < 30". The catalog (1) provides access to estimates of the X-ray source properties for detected sources with good scientific fidelity; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of < 1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics. In addition, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra. Support for development of the CSC is provided by the National Aeronautics and Space Administration through the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS 8-03060.

  13. HTML5 PivotViewer: high-throughput visualization and querying of image data on the web.

    PubMed

    Taylor, Stephen; Noble, Roger

    2014-09-15

    Visualization and analysis of large numbers of biological images has generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open-source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. © The Author 2014. Published by Oxford University Press.

  14. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.

  15. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  16. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  17. Image analysis of pulmonary nodules using micro CT

    NASA Astrophysics Data System (ADS)

    Niki, Noboru; Kawata, Yoshiki; Fujii, Masashi; Kakinuma, Ryutaro; Moriyama, Noriyuki; Tateno, Yukio; Matsui, Eisuke

    2001-07-01

    We are developing a micro-computed tomography (micro CT) system for imaging pulmonary nodules. The purpose is to enhance physician performance in assessing the micro-architecture of a nodule for classification between malignant and benign nodules. The basic components of the micro CT system are a microfocus X-ray source, a specimen manipulator, and an image intensifier detector coupled to a charge-coupled device (CCD) camera. 3D image reconstruction was performed slice by slice. A standard fan-beam convolution and backprojection algorithm was used to reconstruct the center plane intersecting the X-ray source. The preprocessing for the 3D image reconstruction included the correction of the geometrical distortions and of the shading artifact introduced by the image intensifier. The main advantage of the system is its high spatial resolution, which ranges between b micrometers and 25 micrometers. In this work we report on preliminary studies performed with the micro CT for imaging resected tissues of normal and abnormal lung. Experimental results reveal the micro-architecture of lung tissues, such as the alveolar wall, the septal wall of the pulmonary lobule, and the bronchiole. From these results, the micro CT system is expected to have interesting potential for high-confidence differential diagnosis.

  18. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets.

    PubMed

    Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M

    2015-10-01

    Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such datasets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. Time-lapse acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.

  19. Oufti: An integrated software package for high-accuracy, high-throughput quantitative microscopy analysis

    PubMed Central

    Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine

    2016-01-01

    Summary: With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction-limited and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279

  20. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  1. J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis

    NASA Astrophysics Data System (ADS)

    López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration

    2017-10-01

    We present a morphological classification of J-PLUS EDR sources into compact (i.e., stars) and extended (i.e., galaxies) objects. The classification is based on Bayesian modelling of the concentration distribution, including observational errors and magnitude and sky-position priors. We provide the star/galaxy probability of each source computed from the gri images. Comparison with SDSS number counts supports our classification up to r ~ 21. The 31.7 deg² analyzed comprise 150k stars and 101k galaxies.
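
    A minimal sketch of this kind of Bayesian star/galaxy classification, assuming (hypothetically) Gaussian class-conditional concentration distributions and a flat prior; the actual J-PLUS modelling, error treatment, and magnitude + position priors are more elaborate:

```python
import math

def star_probability(c, mu_star=0.3, sd_star=0.05,
                     mu_gal=0.8, sd_gal=0.2, prior_star=0.5):
    """Posterior P(star | concentration c) from two Gaussian class models.

    All distribution parameters here are invented for illustration:
    compact sources (stars) cluster tightly at low concentration,
    extended sources (galaxies) spread at higher concentration.
    """
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    l_star = gauss(c, mu_star, sd_star) * prior_star
    l_gal = gauss(c, mu_gal, sd_gal) * (1.0 - prior_star)
    return l_star / (l_star + l_gal)    # Bayes' rule
```

A source with concentration near the stellar locus gets a star probability close to 1; one near the galaxy locus gets a probability close to 0.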

  2. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals

    PubMed Central

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F.

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of the time-frequency representation of high-frequency brain signals can be larger than 1 terabyte (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed into a channel-cross-channel (CxC) matrix representing the spatiotemporal patterns of every possible sensor pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities, which traditionally took 2–3 days and used 1–2 TB of storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory. PMID:24904402
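
    The core idea, volumetric summation of source activity over time, can be sketched on synthetic data. Everything below (array sizes, the injected oscillation at voxel 42) is a made-up toy, not the authors' CxC-based source imaging pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_windows, n_voxels, n_samples = 20, 1000, 256

# Hypothetical source-space time courses: background noise everywhere,
# plus an extra oscillation at voxel 42 (a stand-in for epileptic activity).
src = rng.standard_normal((n_windows, n_voxels, n_samples))
src[:, 42, :] += 3.0 * np.sin(np.linspace(0.0, 40.0 * np.pi, n_samples))

# Accumulated source imaging (sketch): sum the source power of each
# voxel over all time windows into a single volumetric summary image.
accumulated = (src ** 2).sum(axis=(0, 2))
peak_voxel = int(np.argmax(accumulated))
```

The single accumulated volume summarizes the whole recording, which is why it needs far less memory than keeping the full time-frequency representation.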

  3. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals.

    PubMed

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of the time-frequency representation of high-frequency brain signals can be larger than 1 terabyte (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed into a channel-cross-channel (CxC) matrix representing the spatiotemporal patterns of every possible sensor pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities, which traditionally took 2-3 days and used 1-2 TB of storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory.

  4. Phenotypic and genotypic analysis of Borrelia burgdorferi isolates from various sources.

    PubMed Central

    Adam, T; Gassmann, G S; Rasiah, C; Göbel, U B

    1991-01-01

    A total of 17 B. burgdorferi isolates from various sources were characterized by sodium dodecyl sulfate-polyacrylamide gel electrophoresis of whole-cell proteins, restriction enzyme analysis, Southern hybridization with probes complementary to unique regions of evolutionarily conserved genes (16S rRNA and fla), and direct sequencing of in vitro polymerase chain reaction-amplified fragments of the 16S rRNA gene. Three groups were distinguished on the basis of phenotypic and genotypic traits, the latter traced to the nucleotide sequence level. PMID:1649797

  5. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system and a complicated operation protocol, with possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that we integrated the multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
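
    The inverse step, recovering an internal source distribution from surface flux, is typically posed as an ill-posed linear system and stabilized with regularization. A Tikhonov-regularized sketch on synthetic data (the sensitivity matrix, noise level, and regularization weight are all assumptions; this record does not specify the solver used):

```python
import numpy as np

rng = np.random.default_rng(1)
n_surface, n_voxels = 80, 40

# Hypothetical sensitivity matrix mapping internal source strengths
# to measured surface flux (in practice derived from a light-transport model).
A = rng.random((n_surface, n_voxels))

# Ground truth: a single internal source at voxel 10.
s_true = np.zeros(n_voxels)
s_true[10] = 5.0
b = A @ s_true + 0.01 * rng.standard_normal(n_surface)  # noisy surface flux

# Tikhonov-regularized least squares: (A^T A + lam I) s = A^T b
lam = 1e-2
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ b)
```

With low noise and mild regularization, the reconstruction concentrates at the true source voxel; stronger regularization trades localization sharpness for noise robustness.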

  6. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses

    NASA Astrophysics Data System (ADS)

    Bokhart, Mark T.; Nazari, Milad; Garrard, Kenneth P.; Muddiman, David C.

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written on the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through the MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing the facile analysis of polarity-switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow for the investigation of MMA across the imaging experiment. Most importantly, performance has not degraded as new features have been added; in fact, it has improved dramatically. These new tools and the performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
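
    The MMA heatmap described above reduces to a per-pixel parts-per-million mass error. A minimal sketch (the m/z values below are invented for illustration and are not from MSiReader's documentation):

```python
import numpy as np

def mma_ppm(measured, theoretical):
    """Mass measurement accuracy in parts per million (ppm)."""
    return (measured - theoretical) / theoretical * 1e6

# Hypothetical 2x3 grid of measured m/z values for one analyte of interest.
theoretical = 760.5851
measured = np.array([[760.5851, 760.5859, 760.5843],
                     [760.5855, 760.5851, 760.5847]])

# Each pixel of the heatmap is the signed ppm error at that position.
heatmap = mma_ppm(measured, theoretical)
```

Plotting this array over the imaging grid gives the MMA heatmap; systematic spatial drift in the instrument calibration shows up as a gradient across the image.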

  7. Mass spectral analysis and imaging of tissue by ToF-SIMS--The role of buckminsterfullerene, C60+, primary ions

    NASA Astrophysics Data System (ADS)

    Jones, Emrys A.; Lockyer, Nicholas P.; Vickerman, John C.

    2007-02-01

    Recent developments in desorption/ionisation mass spectrometry techniques have made their application to biological analysis a realistic and successful proposition. Developments in primary ion source technology, mainly through the advent of polyatomic ion beams, have meant that the technique of secondary ion mass spectrometry (SIMS) can now access the depths of information required to allow biological imaging to be a viable option. Here the role of the primary ion C60+ is assessed with regard to molecular imaging of lipids and pharmaceuticals within tissue sections. High secondary ion yields and low surface damage accumulation are demonstrated on both model and real biological samples, indicating the high secondary ion efficiency afforded to the analyst by this primary ion when compared to other cluster ion beams used in imaging. The newly developed 40 keV C60+ ion source allows the beam to be focused such that high resolution imaging is demonstrated on a tissue sample, and the greater yields allow the molecular signal from the drug raclopride to be imaged within tissue section following in vivo dosing. The localisation shown for this drug alludes to issues regarding the chemical environment affecting the ionisation probability of the molecule; the importance of this effect is demonstrated with model systems and the possibility of using laser post-ionisation as a method for reducing this consequence of bio-sample complexity is demonstrated and discussed.

  8. NeuroSeg: automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data.

    PubMed

    Guan, Jiangheng; Li, Jingcheng; Liang, Shanshan; Li, Ruijie; Li, Xingyi; Shi, Xiaozhe; Huang, Ciyu; Zhang, Jianxiong; Pan, Junxia; Jia, Hongbo; Zhang, Le; Chen, Xiaowei; Liao, Xiang

    2018-01-01

    Two-photon Ca2+ imaging has become a popular approach for monitoring neuronal population activity with cellular or subcellular resolution in vivo. This approach allows for the recording of hundreds to thousands of neurons per animal and thus leads to a large amount of data to be processed. In particular, manually drawing regions of interest is the most time-consuming aspect of data analysis. However, the development of automated image analysis pipelines, which will be essential for dealing with the likely future deluge of imaging data, remains a major challenge. To address this issue, we developed NeuroSeg, an open-source MATLAB program that can facilitate the accurate and efficient segmentation of neurons in two-photon Ca2+ imaging data. We proposed an approach using a generalized Laplacian of Gaussian filter to detect cells and weighting-based segmentation to separate individual cells from the background. We tested this approach on an in vivo two-photon Ca2+ imaging dataset obtained from mouse cortical neurons with fields of view of different sizes. We show that this approach exhibits superior performance for cell detection and segmentation compared with existing published tools. In addition, we integrated the previously reported activity-based segmentation into our approach and found that this combined method was even more promising. The NeuroSeg software, including source code and graphical user interface, is freely available and will be a useful tool for in vivo brain activity mapping.
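
    A simplified sketch of Laplacian-of-Gaussian cell detection in the spirit of the approach described above, using a plain (not generalized) LoG filter plus local-maximum picking on a synthetic image; the kernel scale, threshold, and cell positions are all assumptions:

```python
import numpy as np

def log_kernel(sigma, size):
    """Analytic Laplacian-of-Gaussian kernel, zero-mean so flat regions give no response."""
    ax = np.arange(size) - size // 2
    xg, yg = np.meshgrid(ax, ax)
    r2 = xg ** 2 + yg ** 2
    log = (r2 - 2.0 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2.0 * sigma ** 2))
    return log - log.mean()

# Synthetic frame: two bright Gaussian "cells" on a dark background.
n = 64
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
img = np.zeros((n, n))
for cy, cx in [(16, 16), (48, 40)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * 3.0 ** 2))

# Negated LoG makes bright blobs positive peaks; convolve via FFT.
k = -log_kernel(sigma=3.0, size=n)
resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(k))))

# Detected cells = 3x3 local maxima above half the global peak response.
peaks = []
for y in range(1, n - 1):
    for x in range(1, n - 1):
        window = resp[y - 1:y + 2, x - 1:x + 2]
        if resp[y, x] == window.max() and resp[y, x] > 0.5 * resp.max():
            peaks.append((y, x))
```

The detected peaks land on the blob centers; a segmentation step (weighting-based in NeuroSeg) would then grow each peak into a cell region.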

  9. Developing Photoacoustic Tomography Devices for Translational Medicine and Basic Science Research

    NASA Astrophysics Data System (ADS)

    Wong, Terence Tsz Wai

    Photoacoustic (PA) tomography (PAT) provides volumetric images of biological tissue with scalable spatial resolutions and imaging depths, while preserving the same imaging contrast--optical absorption. Taking advantage of its 100% sensitivity to optical absorption, PAT has been widely applied in structural, functional, and molecular imaging, with both endogenous and exogenous contrasts, at greater depths than pure optical methods. Intuitively, hemoglobin has been the most commonly studied biomolecule in PAT due to its strong absorption in the visible wavelength regime. One of the main focuses of this dissertation is to investigate an underexplored wavelength regime--ultraviolet (UV)--which allows us to image cell nuclei without labels and generate histology-like images directly from unprocessed biological tissue. These preparation-free and easy-to-interpret characteristics open up new possibilities for PAT to become readily applicable to other important biomedical problems (e.g., surgical margin analysis, Chapter 2) or basic science studies (e.g., whole-organ imaging, Chapter 3). For instance, we developed and optimized a PA microscopy system with UV laser illumination (UV-PAM) to achieve fast, label-free, multilayered, and histology-like imaging of human breast cancer in Chapter 2. These imaging abilities are essential to intraoperative surgical margin analysis, which enables promptly directed re-excision and reduces the number of repeat surgeries. We have incorporated the Grüneisen relaxation (GR) effect into UV-PAM to improve the performance of our UV-PAM system (e.g., the axial resolution), thus providing more accurate three-dimensional (3D) information (Chapter 4). The nonlinear PA signals caused by the GR effect enable optical sectioning capability, revealing important 3D cell nuclear distributions and internal structures for cancer diagnosis.
In the final focus of this dissertation, we have implemented a low-cost PA computed tomography (PACT) system with a single xenon flash lamp as the illumination source (Chapter 5). Lasers have been commonly used as illumination sources in PACT. However, lasers are usually expensive and bulky, limiting their applicability in many clinical settings. Therefore, the use of a single xenon flash lamp as an alternative light source was explored. We found that PACT images acquired with flash-lamp illumination were comparable to those acquired with laser illumination. This low-cost and portable PACT system opens up new possibilities, such as low-cost skin melanoma imaging in developing countries.

  10. Multiwavelength study of Chandra X-ray sources in the Antennae

    NASA Astrophysics Data System (ADS)

    Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.

    2011-01-01

    We use Wide-field InfraRed Camera (WIRC) infrared (IR) images of the Antennae (NGC 4038/4039) together with the extensive catalogue of 120 X-ray point sources to search for counterpart candidates. Using our proven frame-tie technique, we find 38 X-ray sources with IR counterparts, almost doubling the number of IR counterparts to X-ray sources that we first identified. In our photometric analysis, we consider the 35 IR counterparts that are confirmed star clusters. We show that the clusters with X-ray sources tend to be brighter, Ks ≈ 16 mag, with (J-Ks) = 1.1 mag. We then use archival Hubble Space Telescope (HST) images of the Antennae to search for optical counterparts to the X-ray point sources. We employ our previous IR-to-X-ray frame-tie as an intermediary to establish a precise optical-to-X-ray frame-tie with <0.6 arcsec rms positional uncertainty. Due to the high optical source density near the X-ray sources, we determine that we cannot reliably identify counterparts. Comparing the HST positions to the 35 identified IR star cluster counterparts, we find optical matches for 27 of these sources. Using Bruzual-Charlot spectral evolutionary models, we find that most clusters associated with an X-ray source are massive and young, ~10^6 yr.

  11. J- AND H-BAND IMAGING OF AKARI NORTH ECLIPTIC POLE SURVEY FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Yiseul; Im, Myungshin; Kang, Eugene

    2014-10-01

    We present the J- and H-band source catalog covering the AKARI North Ecliptic Pole field. Filling the gap between the optical data from other follow-up observations and mid-infrared (MIR) data from AKARI, our near-infrared (NIR) data provides contiguous wavelength coverage from optical to MIR. For the J- and H-band imaging, we used the FLoridA Multi-object Imaging Near-ir Grism Observational Spectrometer on the Kitt Peak National Observatory 2.1 m telescope covering a 5.1 deg² area down to a 5σ depth of ∼21.6 mag and ∼21.3 mag (AB) for the J and H bands, with an astrometric accuracy of 0.″14 and 0.″17 (1σ) in the R.A. and decl. directions, respectively. We detected 208,020 sources for the J band and 203,832 sources for the H band. This NIR data is being used for studies including the analysis of the physical properties of infrared sources such as stellar mass and photometric redshifts, and will be a valuable data set for various future missions.

  12. HerMES: ALMA Imaging of Herschel-selected Dusty Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Bussmann, R. S.; Riechers, D.; Fialkov, A.; Scudder, J.; Hayward, C. C.; Cowley, W. I.; Bock, J.; Calanog, J.; Chapman, S. C.; Cooray, A.; De Bernardis, F.; Farrah, D.; Fu, Hai; Gavazzi, R.; Hopwood, R.; Ivison, R. J.; Jarvis, M.; Lacey, C.; Loeb, A.; Oliver, S. J.; Pérez-Fournon, I.; Rigopoulou, D.; Roseboom, I. G.; Scott, Douglas; Smith, A. J.; Vieira, J. D.; Wang, L.; Wardlow, J.

    2015-10-01

    The Herschel Multi-tiered Extragalactic Survey (HerMES) has identified large numbers of dusty star-forming galaxies (DSFGs) over a wide range in redshift. A detailed understanding of these DSFGs is hampered by the limited spatial resolution of Herschel. We present 870 μm 0.″45 resolution imaging obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) of a sample of 29 HerMES DSFGs that have far-infrared (FIR) flux densities that lie between the brightest of sources found by Herschel and fainter DSFGs found via ground-based surveys in the submillimeter region. The ALMA imaging reveals that these DSFGs comprise a total of 62 sources (down to the 5σ point-source sensitivity limit in our ALMA sample; σ ≈ 0.2 mJy). Optical or near-infrared imaging indicates that 36 of the ALMA sources experience a significant flux boost from gravitational lensing (μ > 1.1), but only six are strongly lensed and show multiple images. We introduce and make use of uvmcmcfit, a general-purpose and publicly available Markov chain Monte Carlo visibility-plane analysis tool, to analyze the source properties. Combined with our previous work on brighter Herschel sources, the lens models presented here tentatively favor intrinsic number counts for DSFGs with a break near 8 mJy at 880 μm and a steep fall-off at higher flux densities. Nearly 70% of the Herschel sources break down into multiple ALMA counterparts, consistent with previous research indicating that the multiplicity rate is high in bright sources discovered in single-dish submillimeter or FIR surveys. The ALMA counterparts to our Herschel targets are located significantly closer to each other than ALMA counterparts to sources found in the LABOCA ECDFS Submillimeter Survey. Theoretical models underpredict the excess number of sources with small separations seen in our ALMA sample. 
The high multiplicity rate and small projected separations between sources seen in our sample argue in favor of interactions and mergers plausibly driving both the prodigious emission from the brightest DSFGs as well as the sharp downturn above S_880 = 8 mJy. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  13. Radiograph and passive data analysis using mixed variable optimization

    DOEpatents

    Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.

    2015-06-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed-variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object positioned between the radiation source and the detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine the one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces by optimizing over the continuous interface data.
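
    The mixed-variable idea, enumerating a discrete choice (which material) while solving a continuous sub-problem (how thick), can be sketched with a toy two-energy Beer-Lambert model. The materials, attenuation coefficients, and scoring rule below are invented for illustration and are not from the patent:

```python
import math

# Hypothetical attenuation coefficients (1/cm) at two beam energies.
materials = {
    "aluminum": (0.55, 0.35),
    "iron":     (1.20, 0.60),
    "lead":     (2.40, 1.10),
}

def identify(trans_e1, trans_e2):
    """Mixed-variable search: enumerate the discrete material choice,
    solve the continuous thickness from energy 1 (Beer-Lambert),
    and score each candidate by the residual at energy 2."""
    best_name, best_t, best_err = None, None, float("inf")
    for name, (mu1, mu2) in materials.items():
        t = -math.log(trans_e1) / mu1               # continuous fit
        err = abs(math.exp(-mu2 * t) - trans_e2)    # cross-energy residual
        if err < best_err:
            best_name, best_t, best_err = name, t, err
    return best_name, best_t

# Simulate transmission through a 2 cm iron slab and identify it.
t_true = 2.0
obs1, obs2 = math.exp(-1.20 * t_true), math.exp(-0.60 * t_true)
name, t = identify(obs1, obs2)
```

A single measurement cannot distinguish materials (any attenuation coefficient fits with some thickness); the second energy is what makes the discrete choice identifiable, mirroring the patent's use of multiple constraints over hierarchical solution spaces.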

  14. Imaging System and Method for Biomedical Analysis

    DTIC Science & Technology

    2013-03-11

    biological particles and items of interest. Broadly, Padmanabhan et al. utilize the diffraction of a laser light source in flow cytometry to count ... spread of light from multiple LED devices over the entire sample surface. Preferably, light source 308 projects a full-spectrum white light. Light ... for example, red blood cells, white blood cells (which may include lymphocytes, which are relatively large and easily detectable), T-helper cells

  15. Accommodating multiple illumination sources in an imaging colorimetry environment

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.

    2000-03-01

    Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
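The POCS color-estimation approach alternates projections onto convex constraint sets. A generic sketch with a box constraint and an affine measurement constraint (the random matrix stands in for the actual spectral sensitivity model; none of this is the ITC implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two convex sets: the box 0 <= x <= 1 (physically admissible values)
# and the affine set {x : A x = b} (agreement with the sensor readings)
A = rng.normal(size=(3, 8))            # 3 sensor channels, 8 unknowns
x_true = rng.uniform(0.2, 0.8, 8)
b = A @ x_true

proj_box = lambda x: np.clip(x, 0.0, 1.0)
A_pinv = np.linalg.pinv(A)
proj_affine = lambda x: x - A_pinv @ (A @ x - b)   # orthogonal projection

# POCS: alternate the projections; iterates converge to the intersection
x = np.zeros(8)
for _ in range(500):
    x = proj_box(proj_affine(x))

residual = float(np.linalg.norm(A @ x - b))
```

Because the intersection is nonempty (x_true lies in both sets), the alternating projections converge to a point consistent with both constraints.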

  16. AtomicJ: An open source software for analysis of force curves

    NASA Astrophysics Data System (ADS)

    Hermanowicz, Paweł; Sarna, Michał; Burda, Kvetoslava; Gabryś, Halina

    2014-06-01

We present an open source Java application for analysis of force curves and images recorded with the atomic force microscope. AtomicJ supports a wide range of contact mechanics models and implements procedures that reduce the influence of deviations from the contact model. It generates maps of mechanical properties, including maps of Young's modulus, adhesion force, and sample height. It can also calculate stacks, which reveal how the sample's response to deformation changes with indentation depth. AtomicJ analyzes force curves concurrently on multiple threads, which enables fast analysis. It runs on all popular operating systems, including Windows, Linux, and Macintosh.
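The contact-mechanics fitting such software performs can be illustrated with a minimal sketch: a Hertz model for a spherical tip fitted to a synthetic force curve by least squares (the tip radius, Poisson ratio, and all numbers below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def hertz_force(delta, E, R=1e-6, nu=0.5):
    """Hertz model: force of a spherical tip of radius R (m) indenting a
    sample of Young's modulus E (Pa) and Poisson ratio nu by depth delta (m)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

def fit_youngs_modulus(delta, force, R=1e-6, nu=0.5):
    """The model is linear in E, so the least-squares estimate is closed form:
    E = sum(F * g) / sum(g * g), with g the model evaluated at E = 1."""
    g = hertz_force(delta, 1.0, R, nu)
    return float(np.sum(force * g) / np.sum(g * g))

# Synthetic force curve for a 10 kPa sample, indentation up to 200 nm
delta = np.linspace(0.0, 200e-9, 100)
force = hertz_force(delta, 10e3)
E_est = fit_youngs_modulus(delta, force)
```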

  17. Improvement in the clinical practicability of roentgen stereophotogrammetric analysis (RSA): free from the use of the dual X-ray equipment.

    PubMed

    Shih, Kao-Shang; Lee, Chian-Her; Syu, Ci-Bin; Lai, Jiing-Yih; Chen, Kuo-Jen; Lin, Shang-Chih

    2012-10-01

    After total knee replacement, the monitoring of the prosthetic performance is often done by roentgenographic examination. However, the two-dimensional (2D) roentgen images only provide information about the projection onto the anteroposterior (AP) and mediolateral (ML) planes. Historically, the model-based roentgen stereophotogrammetric analysis (RSA) technique has been developed to predict the spatial relationship between prostheses by iteratively comparing the projective data for the prosthetic models and the roentgen images. During examination, the prosthetic poses should be stationary. This should be ensured, either by the use of dual synchronized X-ray equipment or by the use of a specific posture. In practice, these methods are uncommon or technically inconvenient during follow-up examination. This study aims to develop a rotation platform to improve the clinical applicability of the model-based RSA technique. The rotation platform allows the patient to assume a weight-bearing posture, while being steadily rotated so that both AP and ML knee images can be obtained. This study uses X-ray equipment with a single source and flat panel detectors (FPDs). Four tests are conducted to evaluate the quality of the FPD images, steadiness of the rotation platform, and accuracy of the RSA results. The results show that the distortion-induced error of the FPD image is quite minor, and the prosthetic size can be cautiously calibrated by means of the scale ball(s). The rotation platform should be placed closer to the FPD and orthogonal to the projection axis of the X-ray source. Image overlap of the prostheses can be avoided by adjusting both X-ray source and knee posture. The device-induced problems associated with the rotation platform include the steadiness of the platform operation and the balance of the rotated subject. Sawbone tests demonstrate that the outline error, due to the platform, is of the order of the image resolution (= 0.145 mm). 
In conclusion, the rotation platform with steady rotation, a knee support, and a handle can serve as an alternative method to take prosthetic images, without the loss in accuracy associated with the RSA method.

  18. Imaging of the human choroid with a 1.7 MHz A-scan rate FDML swept source OCT system

    NASA Astrophysics Data System (ADS)

    Gorczynska, I.; Migacz, J. V.; Jonnal, R.; Zawadzki, R. J.; Poddar, R.; Werner, J. S.

    2017-02-01

We demonstrate OCT angiography (OCTA) and Doppler OCT imaging of the choroid in the eyes of two healthy volunteers and in a geographic atrophy case. We show that visualization of specific choroidal layers requires selection of appropriate OCTA methods. We investigate how imaging speed, B-scan averaging, and scanning density influence visualization of various choroidal vessels. We introduce spatial power spectrum analysis of OCT en face angiographic projections as a method for quantitative analysis of choriocapillaris morphology. We explore the possibility of Doppler OCT imaging to provide information about the directionality of blood flow in choroidal vessels. To achieve these goals, we developed OCT systems utilizing an FDML laser operating at a 1.7 MHz sweep rate, at 1060 nm center wavelength, and with 7.5 μm axial imaging resolution. A correlation-mapping OCTA method was implemented for visualization of the vessels. The joint spectral and time domain OCT (STdOCT) technique was used for Doppler OCT imaging.
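The correlation-mapping idea can be sketched as a local Pearson correlation between two repeated B-scans: flowing blood decorrelates the speckle while static tissue does not. The synthetic speckle images and window size below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(6)

def correlation_map(b1, b2, w=3):
    """Local Pearson correlation between two repeated B-scans; static
    tissue stays highly correlated while flow decorrelates the speckle."""
    h, wd = b1.shape
    out = np.ones((h, wd))
    r = w // 2
    for y in range(r, h - r):
        for x in range(r, wd - r):
            p1 = b1[y - r:y + r + 1, x - r:x + r + 1].ravel()
            p2 = b2[y - r:y + r + 1, x - r:x + r + 1].ravel()
            out[y, x] = np.corrcoef(p1, p2)[0, 1]
    return out

# Static tissue: identical speckle in both repeats; a "vessel" region
# receives fresh, independent speckle in the second repeat
b1 = rng.rayleigh(1.0, (40, 40))
b2 = b1.copy()
b2[15:25, 15:25] = rng.rayleigh(1.0, (10, 10))

corr = correlation_map(b1, b2)
tissue = float(corr[5:11, 5:11].mean())
vessel = float(corr[17:23, 17:23].mean())
```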

  19. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

A new processing technique called non-harmonic analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose output depends on the window function and frame length. Axial resolution is inversely proportional to the FFT frame length, which is limited by the sweep range of the swept source in SS-OCT or by the pixel count of the CCD in SD-OCT. The NHA process, however, is intrinsically free from this trade-off: it can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with FFT-based OCT images. To validate the benefit of NHA in OCT, we carried out NHA-based OCT imaging of three different samples: onion skin, human skin, and pig eye. The results show that the NHA process can achieve practical image resolution equivalent to that of a 100 nm sweep range while using less than half that wavelength range.
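The core claim, that a least-squares fit at arbitrary trial frequencies escapes the FFT's bin-resolution limit, can be sketched as follows (a generic non-harmonic sinusoid fit, not the authors' exact NHA algorithm):

```python
import numpy as np

N = 128
n = np.arange(N)
f_true = 10.3                        # cycles per record: off the FFT grid
x = np.cos(2 * np.pi * f_true * n / N)

# The FFT can only report integer bins; the peak lands at bin 10
fft_peak = int(np.argmax(np.abs(np.fft.rfft(x))))

def residual(f):
    """Least-squares misfit of a sinusoid at trial frequency f (cycles/record)."""
    A = np.column_stack([np.cos(2 * np.pi * f * n / N),
                         np.sin(2 * np.pi * f * n / N)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return float(np.sum((x - A @ coef) ** 2))

# A non-harmonic fit is free to test frequencies between the FFT bins
grid = np.arange(9.0, 12.0, 0.01)
f_est = float(grid[np.argmin([residual(f) for f in grid])])
```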

  20. Analysis of gene expression levels in individual bacterial cells without image segmentation.

    PubMed

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J

    2012-05-11

Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast versus fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly. Copyright © 2012 Elsevier Inc. All rights reserved.
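A toy version of the fitting idea, assuming a linear relation between phase-contrast darkening and the amount of cell material, might look like this (the synthetic images and scaling constants are invented for illustration, not the authors' physical model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: amount of cell material ("thickness") per pixel
thickness = rng.uniform(0.0, 1.0, size=(64, 64))
expression = 5.0                      # fluorophores per unit of material

# Cells appear dark in phase contrast and bright in fluorescence
phase = 100.0 - 80.0 * thickness
fluor = expression * thickness + rng.normal(0.0, 0.01, (64, 64))

# Fit the pixelwise relation fluor ~ a + b * phase; the slope encodes
# the expression level once the phase-contrast scaling is undone
b, a = np.polyfit(phase.ravel(), fluor.ravel(), 1)
expr_est = -b * 80.0
```

No cell boundary is ever drawn: the estimate comes entirely from the per-pixel correlation between the two imaging modes.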

  1. Group Analysis in FieldTrip of Time-Frequency Responses: A Pipeline for Reproducibility at Every Step of Processing, Going From Individual Sensor Space Representations to an Across-Group Source Space Representation.

    PubMed

    Andersen, Lau M

    2018-01-01

An important aim of an analysis pipeline for magnetoencephalographic (MEG) data is that it allows the researcher to spend maximal effort on making the statistical comparisons that will answer his or her questions. The example question being answered here is whether the so-called beta rebound differs between novel and repeated stimulations. Two analyses are presented: going from individual sensor space representations to, respectively, an across-group sensor space representation and an across-group source space representation. The data analyzed are neural responses to tactile stimulations of the right index finger in a group of 20 healthy participants acquired from an Elekta Neuromag System. The processing steps covered for the first analysis are: MaxFiltering the raw data; defining, preprocessing and epoching the data; cleaning the data; finding and removing independent components related to eye blinks, eye movements and heart beats; calculating participants' individual evoked responses by averaging over epoched data and subsequently removing the average response from single epochs; calculating a time-frequency representation and baselining it with non-stimulation trials; and finally calculating a grand average, an across-group sensor space representation. The second analysis starts from the grand average sensor space representation; after identification of the beta rebound, its neural origin is imaged using beamformer source reconstruction. This analysis covers reading in co-registered magnetic resonance images, segmenting the data, creating a volume conductor, creating a forward model, cutting out MEG data of interest in the time and frequency domains, getting Fourier transforms and estimating source activity with a beamformer model where power is expressed relative to MEG data measured during periods of non-stimulation. Finally, morphing the source estimates onto a common template and performing group-level statistics on the data are covered.
Functions for saving relevant figures in an automated and structured manner are also included. The protocol presented here can be applied to any research protocol where the emphasis is on source reconstruction of induced responses where the underlying sources are not coherent.
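The baselining step, expressing stimulation power relative to non-stimulation data, can be sketched with plain NumPy (synthetic power arrays stand in for the FieldTrip time-frequency structures; the frequency band and time indices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic time-frequency power: trials x frequencies x times
n_trials, n_freqs, n_times = 20, 30, 100
power = rng.gamma(2.0, 1.0, (n_trials, n_freqs, n_times))
power[:, 15:25, 60:90] += 3.0        # a "beta rebound"-like power increase

# Non-stimulation data provide the baseline, as in the pipeline:
# express stimulation power relative to the per-frequency baseline mean
baseline = rng.gamma(2.0, 1.0, (n_trials, n_freqs, n_times))
base_mean = baseline.mean(axis=(0, 2), keepdims=True)
rel_power = (power.mean(axis=0, keepdims=True) - base_mean) / base_mean
rel_power = rel_power[0]             # frequencies x times

rebound = float(rel_power[15:25, 60:90].mean())     # large relative increase
elsewhere = float(rel_power[:10, :50].mean())       # near zero
```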

  2. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

Summary: ThunderSTORM is an open-source, interactive, and modular plug-in for ImageJ designed for automated processing, analysis, and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for the creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/. Contact: guy.hagen@lf1.cuni.cz. Supplementary information: Supplementary data are available at Bioinformatics online. PMID: 24771516
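The localization step at the heart of PALM/STORM processing can be illustrated with a background-subtracted centroid fit on a synthetic single-molecule image (a simpler estimator than ThunderSTORM's Gaussian fitting; all parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic single-molecule image: Gaussian PSF + background + shot noise
yy, xx = np.mgrid[0:15, 0:15]
x0, y0, sigma = 7.3, 6.6, 1.5
psf = 100.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
img = rng.poisson(psf + 2.0).astype(float)

# Background-subtracted centroid: a simple sub-pixel position estimate
bg = np.median(img)
w = np.clip(img - bg, 0.0, None)
x_est = float((w * xx).sum() / w.sum())
y_est = float((w * yy).sum() / w.sum())
```

Sub-pixel localization of many such emitters, accumulated over thousands of frames, is what produces the super-resolved image.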

  3. Nested Focusing Optics for Compact Neutron Sources

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2015-01-01

    NASA's Marshall Space Flight Center, the Massachusetts Institute of Technology (MIT), and the University of Alabama Huntsville (UAH) have developed novel neutron grazing incidence optics for use with small-scale portable neutron generators. The technology was developed to enable the use of commercially available neutron generators for applications requiring high flux densities, including high performance imaging and analysis. Nested grazing incidence mirror optics, with high collection efficiency, are used to produce divergent, parallel, or convergent neutron beams. Ray tracing simulations of the system (with source-object separation of 10m for 5 meV neutrons) show nearly an order of magnitude neutron flux increase on a 1-mm diameter object. The technology is a result of joint development efforts between NASA and MIT researchers seeking to maximize neutron flux from diffuse sources for imaging and testing applications.

  4. Image analysis for quantification of bacterial rock weathering.

    PubMed

    Puente, M Esther; Rodriguez-Jaramillo, M Carmen; Li, Ching Y; Bashan, Yoav

    2006-02-01

A fast, quantitative image analysis technique was developed to assess potential rock weathering by bacteria. The technique is based on the reduction in the surface area of rock particles and on counting the relative increase in the number of small particles in ground rock slurries. This was done by recording changes in ground rock samples with an electronic image analysis process. The slurries were previously amended with three carbon sources, ground to a uniform particle size, and incubated with rock-weathering bacteria for 28 days. The technique was developed and tested using two rock-weathering bacteria, Pseudomonas putida R-20 and Azospirillum brasilense Cd, on marble, granite, apatite, quartz, limestone, and volcanic rock as substrates. The image analyzer processed a large number of particles (10⁷-10⁸ per sample), so that the weathering capacity of bacteria could be detected.
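The particle-size bookkeeping behind the technique, counting the relative increase of small particles, can be sketched on synthetic area distributions (the distributions and cutoff are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Particle areas (in pixels) before incubation: a lognormal distribution
before = rng.lognormal(mean=4.0, sigma=0.5, size=10000)

# After incubation: particles shrink slightly and many small fragments appear
after = np.concatenate([before * 0.8,
                        rng.lognormal(mean=2.5, sigma=0.4, size=4000)])

# Weathering readout: relative increase in the number of small particles
small = 30.0                                   # "small particle" cutoff
frac_before = float((before < small).mean())
frac_after = float((after < small).mean())
weathering_index = frac_after / frac_before
```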

  5. [Watching dance of the molecules - CARS microscopy].

    PubMed

    Korczyński, Jaroslaw; Kubiak, Katarzyna; Węgłowska, Edyta

    2017-01-01

CARS (Coherent Anti-Stokes Raman Scattering) microscopy is an imaging method for the visualization of living cells as well as for the analysis of food or cosmetic materials without the need for staining. The near-infrared laser source generates the CARS signal - the characteristic intrinsic vibrational contrast of the molecules in a sample, which arises not from staining but from the molecules themselves. This provides the benefit of a non-toxic, non-destructive, and almost noninvasive method for sample imaging. CARS can easily be combined with fluorescence confocal microscopy, so it is an excellent complementary imaging method. In this article we show some applications of this technology: imaging of lipid droplets inside human HaCaT cells and analysis of the composition of cosmetic products. Moreover, we believe that new fields of application will soon become accessible for this rapidly developing branch of microscopy.

  6. Wide-field direct CCD observations supporting the Astro-1 Space Shuttle mission's Ultraviolet Imaging Telescope

    NASA Technical Reports Server (NTRS)

    Hintzen, Paul; Angione, Ron; Talbert, Freddie; Cheng, K.-P.; Smith, Eric; Stecher, Theodore P.

    1993-01-01

Wide field direct CCD observations are being obtained to support and complement the vacuum-ultraviolet (VUV) images provided by Astro's Ultraviolet Imaging Telescope (UIT) during a Space Shuttle flight in December 1990. Because of the wide variety of projects addressed by UIT, the fields observed include (1) galactic supernova remnants such as the Cygnus Loop and globular clusters such as Omega Cen and M79; (2) the Magellanic Clouds, M33, M81, and other galaxies in the Local Group; and (3) rich clusters of galaxies, principally the Perseus cluster and Abell 1367. Ground-based observations have been obtained for virtually all of the Astro-1 UIT fields. The optical images allow identification of individual UV sources in each field and provide the long baseline in wavelength necessary for accurate analysis of UV-bright sources. To facilitate use of our optical images for analysis of UIT data and other projects, we plan to archive them, with the UIT images, at the National Space Science Data Center (NSSDC), where they will be universally accessible via anonymous FTP. The UIT, one of three telescopes comprising the Astro spacecraft, is a 38-cm f/9 Ritchey-Chretien telescope on which high quantum efficiency, solar-blind image tubes are used to record VUV images on photographic film. Five filters with passbands centered between 1250 Å and 2500 Å provide both VUV colors and a measurement of extinction via the 2200 Å dust feature. The resulting calibrated VUV pictures are 40 arcminutes in diameter at 2.5 arcseconds resolution. The capabilities of UIT, therefore, complement HST's WFPC: the latter has 40 times greater collecting area, while UIT's usable field has 170 times WFPC's field area.

  7. New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.

    2013-02-01

We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in Python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
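The trimmed-mean combination described above can be sketched directly (the trim fraction and synthetic frames are illustrative):

```python
import numpy as np

def trimmed_mean_combine(frames, trim=0.1):
    """Combine an image sequence with a per-pixel trimmed mean: sort the
    frames at each pixel, drop the lowest and highest `trim` fraction,
    and average the rest. Outliers such as cosmic-ray hits are rejected."""
    ordered = np.sort(frames, axis=0)
    n = ordered.shape[0]
    k = int(n * trim)
    return ordered[k:n - k].mean(axis=0)

rng = np.random.default_rng(2)
frames = rng.normal(100.0, 1.0, size=(20, 8, 8))
frames[3, 4, 4] = 1e6                 # one corrupted pixel in one frame

combined = trimmed_mean_combine(frames, trim=0.1)
# A plain mean of the same stack would be pulled far from 100 at that pixel
```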

  8. Multi-Source Image Analysis.

    DTIC Science & Technology

    1979-12-01

    vegetation shows on the imagery but emphasis has been placed on the detection of wooded and scrub areas and the differentiation between deciduous and...S. A., 1974b, Phenology and remote sensing, phenology and seasonality modeling: in Helmut Lieth, H. (ed.), Ecological Studies-Analysis and Synthesis...Remote Sensing of Ecology , University of d-eorgia Press, Athens, Georgia, p. 63-94. Phillipson, W. R. and T. Liang, 1975, Airphoto analysis in the

  9. The Chandra Source Catalog: Source Properties and Data Products

    NASA Astrophysics Data System (ADS)

    Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).
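One of the spectral properties mentioned, a hardness ratio with an accompanying error estimate, can be sketched as follows (simple first-order Poisson error propagation; the CSC's actual pipeline uses a more careful statistical treatment):

```python
import numpy as np

def hardness_ratio(hard, soft):
    """Hardness ratio HR = (H - S) / (H + S) from counts in a hard and a
    soft band, with a first-order Poisson error (var H = H, var S = S)."""
    h, s = float(hard), float(soft)
    hr = (h - s) / (h + s)
    # propagate: var(HR) = (2S/(H+S)^2)^2 * H + (2H/(H+S)^2)^2 * S
    err = 2.0 * np.sqrt(h * s * (h + s)) / (h + s) ** 2
    return hr, err

hr, err = hardness_ratio(hard=80, soft=120)
```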

  10. Scoping Study of Machine Learning Techniques for Visualization and Analysis of Multi-source Data in Nuclear Safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang

In implementation of nuclear safeguards, many different techniques are being used to monitor operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, and digital seals to open source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or loose correlations, it could be beneficial to analyze the data sets together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.

  11. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
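The region-growing segmentation used in the Spring workflow can be sketched in a few lines (a generic single-seed grower on a synthetic grayscale image, not Spring's implementation; the tolerance and image values are assumptions):

```python
import numpy as np
from collections import deque

def region_growing(img, seed, tol=10.0):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    gray value lies within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# A 10x10 bright object (value 200) on a dark background (value 50)
img = np.full((20, 20), 50.0)
img[5:15, 5:15] = 200.0
mask = region_growing(img, seed=(10, 10), tol=20.0)
```

In an OBIA workflow, the resulting regions (objects), rather than individual pixels, are what the classifier then labels.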

  12. Multicriteria analysis for sources of renewable energy using data from remote sensing

    NASA Astrophysics Data System (ADS)

    Matejicek, L.

    2015-04-01

Renewable energy sources are major components of the strategy to reduce harmful emissions and to replace depleting fossil energy resources. Data from remote sensing can provide information for multicriteria analysis for sources of renewable energy. Advanced land cover quantification makes it possible to search for suitable sites. Multicriteria analysis, together with other data, is used to determine the energy potential and social acceptability of suggested locations. The described case study is focused on an area of surface coal mines in the northwestern region of the Czech Republic, where the impacts of surface mining and reclamation constitute a dominant force in land cover changes. High-resolution satellite images represent the main input datasets for identification of suitable sites. Solar mapping, wind predictions, the location of weirs in watersheds, road maps, and demographic information complement the data from remote sensing for multicriteria analysis, which is implemented in a geographic information system (GIS). The input spatial datasets for multicriteria analysis in GIS are reclassified to a common scale and processed with raster algebra tools to identify suitable sites for sources of renewable energy. The selection of suitable sites is limited by the CORINE land cover database to mining and agricultural areas. The case study is focused on long-term land cover changes in the 1985-2015 period. Multicriteria analysis based on CORINE data shows moderate changes in mapping of suitable sites for utilization of selected sources of renewable energy in 1990, 2000, 2006 and 2012. The results represent map layers showing the energy potential on a scale of a few preference classes (1-7), where the first class is linked to minimum preference and the last class to maximum preference. The attached histograms show the moderate variability of preference classes due to land cover changes caused by mining activities.
The results also show a slight increase in the more preferred classes for utilization of sources of renewable energy due to an increased area of reclaimed sites. Using data from remote sensing, such as multispectral images and the CORINE land cover datasets, can reduce the financial resources currently required for finding and assessing suitable areas.
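The raster-algebra overlay described above, reclassifying each input to the common 1-7 preference scale and combining the layers, can be sketched with NumPy (the layers, value ranges, and weights are invented for illustration, not the study's criteria):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy input rasters on a common grid
solar = rng.uniform(900.0, 1300.0, (50, 50))    # kWh/m2/year
wind = rng.uniform(2.0, 9.0, (50, 50))          # mean wind speed, m/s
road_dist = rng.uniform(0.0, 5000.0, (50, 50))  # distance to roads, m

def reclassify(raster, low, high, classes=7):
    """Rescale a raster onto the common 1..classes preference scale;
    passing low > high inverts the preference (smaller is better)."""
    scaled = (raster - low) / (high - low)
    return np.clip(np.ceil(scaled * classes), 1, classes).astype(int)

# Raster algebra: weighted overlay of the reclassified layers
pref = (0.5 * reclassify(solar, 900.0, 1300.0)
        + 0.3 * reclassify(wind, 2.0, 9.0)
        + 0.2 * reclassify(road_dist, 5000.0, 0.0))  # nearer roads preferred
suitability = np.clip(np.round(pref), 1, 7).astype(int)
```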

  13. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the wavelength of the light source for discriminating diseased seeds from normal rice seeds in the monochromic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20 mm to 25 mm lens extender produces close-up images, which makes it easy to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease and for using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under optimized conditions for quality inspection of rice seed. Specifically, the image processing can resolve details such as fine fissures with the machine vision system.

  14. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection

    PubMed Central

    Choudhry, Priya

    2016-01-01

    Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
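An edge-detection-based counter in the spirit of Cell Colony Edge might be sketched like this (gradient thresholding plus hole filling and connected-component labelling; the threshold rule and synthetic plate are assumptions, not the macro's algorithm):

```python
import numpy as np
from scipy import ndimage

def count_colonies(img):
    """Count colonies: threshold the gradient magnitude (edges), fill the
    closed contours, and count connected components."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    threshold = edges.mean() + 2.0 * edges.std()
    mask = ndimage.binary_fill_holes(edges > threshold)
    _, n = ndimage.label(mask)
    return int(n)

# Synthetic plate: three circular colonies on a uniform background
yy, xx = np.mgrid[0:100, 0:100]
img = np.zeros((100, 100))
for cy, cx in [(25, 25), (25, 75), (70, 50)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 64] += 1.0

n_colonies = count_colonies(img)
```

Labelling the filled mask also yields per-colony areas and shapes, the other quantities the macro reports.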

  15. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction, and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.

    PubMed

    Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin

    2018-02-09

Floods are among the most expensive natural hazards experienced in many parts of the world and can result in heavy loss of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Spatial distribution analysis of flood extent and depth in a time series reflects the inundation conditions and the characteristics of the flood disaster. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, helping to reconstruct the hydrological sequence; that combining MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products with the DFO (Dartmouth Flood Observatory) flood database achieves macro-dynamic monitoring of flood inundation in ungauged basins, after which differencing of high-resolution optical and microwave images acquired before and after floods can be used to calculate flood extent and reflect spatial changes in inundation; and that the monitoring algorithm for flood depth, combining RS and GIS, is simple and can quickly calculate depth from a known flood extent obtained from remote sensing images in ungauged basins. These results can provide effective help for the disaster relief work performed by government departments.
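The depth step reduces to subtracting ground elevation from a water-surface elevation inside the remotely sensed flood extent. A minimal NumPy sketch, assuming a known water level (e.g. taken from the DEM along the detected flood boundary); the function name and inputs are illustrative, not from the paper:

```python
import numpy as np

def flood_depth(dem, extent_mask, water_level):
    """Estimate flood depth from a DEM and a remotely sensed flood extent.

    dem: 2D array of ground elevations (m).
    extent_mask: boolean array, True where imagery shows inundation.
    water_level: scalar water-surface elevation (m).
    Returns a depth grid (m), zero outside the flood extent.
    """
    depth = np.where(extent_mask, water_level - dem, 0.0)
    return np.clip(depth, 0.0, None)  # no negative depths inside the extent
```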

  17. Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.

    PubMed

    Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C

    2017-06-01

    An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an imaging analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested for calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and that improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
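The per-particle rule can be sketched as follows: candidate particles are found with a permissive global threshold, then each boundary is refined at a level tied to that particle's maximum gray value. This is an illustrative Python/SciPy sketch with assumed parameter names (`coarse_level`, `frac`), not the FIJI plugin's implementation:

```python
import numpy as np
from scipy import ndimage

def variable_threshold_masks(image, coarse_level=20, frac=0.5):
    """Refine particle boundaries with a per-particle threshold.

    A permissive global threshold finds candidate particles; each particle's
    final boundary then uses a threshold set to `frac` of that particle's
    maximum gray value. High-contrast particles therefore get tight
    boundaries (halos excluded), while faint translucent particles are kept.
    """
    coarse = image > coarse_level
    labels, n = ndimage.label(coarse)
    refined = np.zeros_like(coarse)
    for i in range(1, n + 1):
        region = labels == i
        peak = image[region].max()              # particle's maximum gray value
        refined |= region & (image >= frac * peak)
    return refined
```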

  18. Evaluating the purity of a {sup 57}Co flood source by PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiFilippo, Frank P., E-mail: difilif@ccf.org

    2014-11-01

Purpose: Flood sources of {sup 57}Co are commonly used for quality control of gamma cameras. Flood uniformity may be affected by the contaminants {sup 56}Co and {sup 58}Co, which emit higher energy photons. Although vendors specify a maximum combined {sup 56}Co and {sup 58}Co activity, a convenient test for flood source purity that is feasible in a clinical environment would be desirable. Methods: Both {sup 56}Co and {sup 58}Co emit positrons with branching 19.6% and 14.9%, respectively. As is known from {sup 90}Y imaging, a positron emission tomography (PET) scanner is capable of quantitatively imaging very weak positron emission in a high single-photon background. To evaluate this approach, two {sup 57}Co flood sources were scanned with a clinical PET/CT multiple times over a period of months. The {sup 56}Co and {sup 58}Co activity was clearly visible in the reconstructed PET images. Total impurity activity was quantified from the PET images after background subtraction of prompt gamma coincidences. Results: Time-of-flight PET reconstruction was highly beneficial for accurate image quantification. Repeated measurements of the positron-emitting impurities showed excellent agreement with an exponential decay model. For both flood sources studied, the fit parameters indicated a zero intercept and a decay half-life consistent with a mixture of {sup 56}Co and {sup 58}Co. The total impurity activity at the reference date was estimated to be 0.06% and 0.07% for the two sources, which was consistent with the vendor's specification of <0.12%. Conclusions: The robustness of the repeated measurements and a thorough analysis of the detector corrections and physics suggest that the accuracy is acceptable and that the technique is feasible. Further work is needed to validate the accuracy of this technique with a calibrated high resolution gamma spectrometer as a gold standard, which was not available for this study, and for other PET detector models.
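The decay check described above amounts to fitting the repeated activity measurements to a single exponential and comparing the fitted half-life with those of {sup 56}Co (~77 d) and {sup 58}Co (~71 d). A minimal log-linear fit sketch, illustrative rather than the authors' analysis code:

```python
import numpy as np

def fit_half_life(t_days, activity):
    """Fit A(t) = A0 * exp(-lambda * t) by linear regression on log(A).

    Returns (A0, half_life_days). A single effective half-life is fit,
    as in the exponential-decay check on the repeated PET scans.
    """
    slope, log_a0 = np.polyfit(t_days, np.log(activity), 1)
    return np.exp(log_a0), np.log(2) / -slope
```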

  19. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).

  20. Study of atmospheric diffusion using LANDSAT

    NASA Technical Reports Server (NTRS)

    Torsani, J. A.; Viswanadham, Y.

    1982-01-01

The parameters of diffusion patterns of atmospheric pollutants under different conditions were investigated for use in the Gaussian model for calculation of pollution concentration. Values for the divergence pattern of concentration distribution along the Y axis were determined using LANDSAT images. Multispectral scanner images of a point source plume having known characteristics, wind and temperature data, and cloud cover and solar elevation data provided by LANDSAT were analyzed using the 1-100 system for image analysis. These measured values are compared with pollution transport as predicted by the Pasquill-Gifford, Juelich, and Hoegstroem atmospheric models.

  1. RELICS: Strong-lensing Analysis of the Massive Clusters MACS J0308.9+2645 and PLCK G171.9‑40.7

    NASA Astrophysics Data System (ADS)

    Acebron, Ana; Cibirka, Nathália; Zitrin, Adi; Coe, Dan; Agulli, Irene; Sharon, Keren; Bradač, Maruša; Frye, Brenda; Livermore, Rachael C.; Mahler, Guillaume; Salmon, Brett; Umetsu, Keiichi; Bradley, Larry; Andrade-Santos, Felipe; Avila, Roberto; Carrasco, Daniela; Cerny, Catherine; Czakon, Nicole G.; Dawson, William A.; Hoag, Austin T.; Huang, Kuang-Han; Johnson, Traci L.; Jones, Christine; Kikuchihara, Shotaro; Lam, Daniel; Lovisari, Lorenzo; Mainali, Ramesh; Oesch, Pascal A.; Ogaz, Sara; Ouchi, Masami; Past, Matthew; Paterno-Mahler, Rachel; Peterson, Avery; Ryan, Russell E.; Sendra-Server, Irene; Stark, Daniel P.; Strait, Victoria; Toft, Sune; Trenti, Michele; Vulcani, Benedetta

    2018-05-01

    Strong gravitational lensing by galaxy clusters has become a powerful tool for probing the high-redshift universe, magnifying distant and faint background galaxies. Reliable strong-lensing (SL) models are crucial for determining the intrinsic properties of distant, magnified sources and for constructing their luminosity function. We present here the first SL analysis of MACS J0308.9+2645 and PLCK G171.9‑40.7, two massive galaxy clusters imaged with the Hubble Space Telescope, in the framework of the Reionization Lensing Cluster Survey (RELICS). We use the light-traces-mass modeling technique to uncover sets of multiply imaged galaxies and constrain the mass distribution of the clusters. Our SL analysis reveals that both clusters have particularly large Einstein radii (θ E > 30″ for a source redshift of z s = 2), providing fairly large areas with high magnifications, useful for high-redshift galaxy searches (∼2 arcmin2 with μ > 5 to ∼1 arcmin2 with μ > 10, similar to a typical Hubble Frontier Fields cluster). We also find that MACS J0308.9+2645 hosts a promising, apparently bright (J ∼ 23.2–24.6 AB), multiply imaged high-redshift candidate at z ∼ 6.4. These images are among the brightest high-redshift candidates found in RELICS. Our mass models, including magnification maps, are made publicly available for the community through the Mikulski Archive for Space Telescopes.

  2. Ultraluminous X-ray Sources in NGC 6946.

    NASA Astrophysics Data System (ADS)

    Sánchez Cruces, Mónica; Rosado, Margarita; Fuentes-Carrera, Isaura L.

    2016-07-01

Ultra-luminous X-ray sources (ULXs) are the most X-ray luminous off-nucleus objects in nearby galaxies, with X-ray luminosities between 10^{39} and 10^{41} erg s^{-1} in the 0.5-10 keV band. Since these luminosities cannot be explained by standard accretion onto a stellar-mass black hole, these sources are often associated with intermediate-mass black holes (IMBHs, 10^{2}-10^{4} solar masses). However, significantly beamed stellar binary systems could also explain these luminosities. Observational knowledge of the angular distribution of the source emission is essential to decide between these two scenarios. In this work, we present the X-ray analysis of five ULXs in the spiral galaxy NGC 6946, along with the kinematical analysis of the ionized gas surrounding each of these sources. For all sources, X-ray observations reveal a typical ULX spectral shape (with a soft excess below 2 keV and a hard curvature above 2 keV) which can be fit with a power-law + multi-color disk model. However, even though ULXs are classified as point-like objects, one of the sources in this galaxy displays an elongated shape in the Chandra images. Regarding the analysis of the emission lines of the gas within ~300 pc of each ULX, scanning Fabry-Perot observations show composite profiles for three of the five ULXs. The main component of these profiles follows the global rotation of the galaxy, while the faint secondary component seems to be associated with asymmetrical gas expansion. These sources have also been located in archival images of NGC 6946 at different wavelengths in order to relate them to different physical processes occurring in this galaxy. Though ULXs are usually located in star formation regions, we find that two of the sources lie a few tens of parsecs away from different HII regions.
Based on the X-ray morphology of each ULX, the velocities and distribution of the surrounding gas, as well as the location of the source in the context of the whole galaxy, we give the most favorable scenario in each case in order to describe the multiwavelength properties of these sources.
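The IMBH-versus-beaming argument rests on the Eddington limit: the maximum luminosity an isotropically emitting accretor of a given mass can sustain. A back-of-the-envelope sketch using the standard hydrogen-accretion coefficient L_Edd ≈ 1.26×10^38 (M/M_sun) erg/s (illustrative, not taken from the paper):

```python
# Eddington luminosity per solar mass for pure-hydrogen accretion (erg/s).
EDDINGTON_PER_MSUN = 1.26e38

def eddington_luminosity(m_solar):
    """Eddington luminosity (erg/s) for an accretor of m_solar solar masses."""
    return EDDINGTON_PER_MSUN * m_solar

def min_isotropic_mass(l_x):
    """Minimum mass (solar masses) required to reach X-ray luminosity l_x
    (erg/s) without exceeding the Eddington limit, assuming isotropic
    emission; beaming relaxes this requirement."""
    return l_x / EDDINGTON_PER_MSUN
```

For a ULX at 10^40 erg/s this gives roughly 80 solar masses, above the usual stellar-mass black-hole range, which is the motivation for the IMBH interpretation unless the emission is beamed.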

  3. Investigating the Origins of Two Extreme Solar Particle Events: Proton Source Profile and Associated Electromagnetic Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocharov, Leon; Usoskin, Ilya; Pohjolainen, Silja

We analyze the high-energy particle emission from the Sun in two extreme solar particle events in which protons are accelerated to relativistic energies and can cause a significant signal even in the ground-based particle detectors. Analysis of a relativistic proton event is based on modeling of the particle transport and interaction, from a near-Sun source through the solar wind and the Earth's magnetosphere and atmosphere to a detector on the ground. This allows us to deduce the time profile of the proton source at the Sun and compare it with observed electromagnetic emissions. The 1998 May 2 event is associated with a flare and a coronal mass ejection (CME), which were well observed by the Nançay Radioheliograph, thus the images of the radio sources are available. For the 2003 November 2 event, the low corona images of the CME liftoff obtained at the Mauna Loa Solar Observatory are available. Those complementary data sets are analyzed jointly with the broadband dynamic radio spectra, EUV images, and other data available for both events. We find a common scenario for both eruptions, including the flare's dual impulsive phase, the CME-launch-associated decimetric-continuum burst, and the late, low-frequency type III radio bursts at the time of the relativistic proton injection into the interplanetary medium. The analysis supports the idea that the two considered events start with emission of relativistic protons previously accelerated during the flare and CME launch, then trapped in large-scale magnetic loops and later released by the expanding CME.

  4. MULTI-SOURCE FEATURE LEARNING FOR JOINT ANALYSIS OF INCOMPLETE MULTIPLE HETEROGENEOUS NEUROIMAGING DATA

    PubMed Central

    Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping

    2012-01-01

Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655
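The iMSF idea of using every sample with at least one available source starts by partitioning subjects according to their pattern of available modalities. A minimal sketch of that partitioning step; the data layout and function name are assumptions for the example, not the authors' code:

```python
from collections import defaultdict

def group_by_availability(subjects):
    """Group samples by which data sources they have, as in iMSF-style
    learning where each availability pattern gets its own feature model.

    subjects: dict mapping subject id -> dict of source name -> data,
        with None marking a missing source.
    Returns a dict mapping a frozenset of available sources -> list of ids.
    """
    groups = defaultdict(list)
    for sid, sources in subjects.items():
        available = frozenset(k for k, v in sources.items() if v is not None)
        if available:  # keep every subject with at least one source
            groups[available].append(sid)
    return dict(groups)
```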

  5. Alternative light source (polilight) illumination with digital image analysis does not assist in determining the age of bruises.

    PubMed

    Hughes, V K; Ellis, P S; Langlois, N E I

    2006-05-10

The age of a bruise may be of interest to forensic investigators. Previous research has demonstrated that an alternative light source may assist in the visualisation of faint or non-visible bruises. This project aimed to determine if an alternative light source could be utilised to assist investigators in estimating the age of a bruise. Forty bruises, sustained from blunt force trauma, were examined from 30 healthy subjects. The age of the bruises ranged from 2 to 231 h (mean = 74.6, median = 69.0). Alternative light source (polilight) illumination at 415 and 450 nm was used. The black and white photographs obtained were assessed using densitometry. A statistical analysis indicated that there was no correlation between time and the mean densitometry values. The alternative light source used in this study was unable to assist in determining the age of a bruise.

  6. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    PubMed

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who derive confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  7. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.

  8. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    NASA Astrophysics Data System (ADS)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  9. NiftyNet: a deep-learning platform for medical imaging.

    PubMed

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. 
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  10. IMPULSIVE PHASE CORONAL HARD X-RAY SOURCES IN AN X3.9 CLASS SOLAR FLARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Qingrong; Petrosian, Vahe, E-mail: qrchen@gmail.com, E-mail: vahep@stanford.edu

    2012-03-20

We present the analysis of a pair of unusually energetic coronal hard X-ray (HXR) sources detected by the Reuven Ramaty High Energy Solar Spectroscopic Imager during the impulsive phase of an X3.9 class solar flare on 2003 November 3, which simultaneously shows two intense footpoint (FP) sources. A distinct loop top (LT) coronal source is detected up to ~150 keV and a second (upper) coronal source up to ~80 keV. These photon energies, which were not fully investigated in earlier analysis of this flare, are much higher than commonly observed in coronal sources and pose grave modeling challenges. The LT source in general appears higher in altitude with increasing energy and exhibits a more limited motion compared to the expansion of the thermal loop. The high-energy LT source shows an impulsive time profile and its nonthermal power-law spectrum exhibits soft-hard-soft evolution during the impulsive phase, similar to the FP sources. The upper coronal source exhibits an opposite spatial gradient and a similar spectral slope compared to the LT source. These properties are consistent with the model of stochastic acceleration of electrons by plasma waves or turbulence. However, the LT and FP spectral index difference (varying from ~0 to 1) is much smaller than commonly measured and than that expected from a simple stochastic acceleration model. Additional confinement or trapping mechanisms of high-energy electrons in the corona are required. Comprehensive modeling including both kinetic effects and the macroscopic flare structure may shed light on this behavior. These results highlight the importance of imaging spectroscopic observations of the LT and FP sources up to high energies in understanding electron acceleration in solar flares. 
Finally, we show that the electrons producing the upper coronal HXR source may very likely be responsible for the type III radio bursts at the decimetric/metric wavelength observed during the impulsive phase of this flare.

  11. Multifit / Polydefix : a framework for the analysis of polycrystal deformation using X-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkel, Sébastien; Hilairet, Nadège

    2015-06-27

Multifit/Polydefix is an open-source IDL software package for the efficient processing of diffraction data obtained in deformation apparatuses at synchrotron beamlines. Multifit allows users to decompose two-dimensional diffraction images into azimuthal slices, fit peak positions, shapes and intensities, and propagate the results to other azimuths and images. Polydefix is for analysis of deformation experiments. Starting from output files created in Multifit or other packages, it will extract elastic lattice strains, evaluate sample pressure and differential stress, and prepare input files for further texture analysis. The Multifit/Polydefix package is designed to make the tedious data analysis of synchrotron-based plasticity, rheology or other time-dependent experiments very straightforward and accessible to a wider community.

  12. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  13. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
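The reported search of about 100 points from a parameter space with billions of candidates is, in spirit, a small-budget search scored by an overlap metric such as Dice. A hedged sketch; random search is shown only for illustration, as the paper's framework uses its own sampling and pruning strategies:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary segmentations (higher is better)."""
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def random_search(score, param_space, n_points=100, seed=0):
    """Evaluate ~n_points random parameter settings and keep the best.

    score: maps a parameter dict to a quality metric such as Dice.
    param_space: dict mapping parameter name -> array of candidate values.
    """
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -np.inf
    for _ in range(n_points):
        params = {k: rng.choice(v) for k, v in param_space.items()}
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```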

  14. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing-core filters at the bottom layer. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.
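The two-layer design, a GUI on top of interchangeable processing-core filters, can be sketched with a strategy interface. The class and function names here are illustrative, not the framework's actual API, and the toy region-growing strategy stands in for a real deformable model:

```python
from abc import ABC, abstractmethod
import numpy as np
from scipy import ndimage

class SegmentationFilter(ABC):
    """Processing-core interface: each segmentation strategy implements one
    evolution step, so strategies can be combined or swapped behind a GUI."""
    @abstractmethod
    def step(self, image, mask):
        """Return an updated binary mask."""

class ThresholdGrow(SegmentationFilter):
    """Toy strategy: grow the mask by one pixel, keeping pixels above a level."""
    def __init__(self, level):
        self.level = level
    def step(self, image, mask):
        return ndimage.binary_dilation(mask) & (image > self.level)

def run_pipeline(image, mask, filters, n_iter=10):
    """Bottom-layer driver the GUI layer would invoke: apply each strategy
    in turn for a fixed number of iterations."""
    for _ in range(n_iter):
        for f in filters:
            mask = f.step(image, mask)
    return mask
```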

  15. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using the data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore the back projection scheme of the crustal Pn phase at regional distances, which provides unique complementary insights to the teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the orthogonality of the signal and noise subspaces of the data covariance matrix, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. 
We also study the inherent bias due to the interference of coherent Green's functions, which suggests a way to quantify the uncertainty that this bias introduces into the back projection. Preliminary results from the Venezuela data set show an east-to-west rupture propagation along the fault at sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet unbroken segments of the Enriquillo fault system.
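The MUSIC step described above can be sketched for a simple uniform linear array; the array geometry, noise level and source directions below are illustrative assumptions, not the VNSN configuration. The pseudospectrum peaks where a candidate steering vector is nearly orthogonal to the noise subspace of the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200                # sensors, snapshots
d = 0.5                      # sensor spacing in wavelengths
angles_true = [-10.0, 12.0]  # illustrative source directions (degrees)

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulated snapshots: two uncorrelated sources plus weak sensor noise
A = np.column_stack([steering(t) for t in angles_true])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N            # sample covariance matrix
w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
En = V[:, :M - 2]                 # noise subspace (2 sources assumed known)

grid = np.arange(-30, 30.1, 0.1)
# MUSIC pseudospectrum: large where steering vector ⟂ noise subspace
P = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print(grid[np.argmax(P)])         # peaks near one of the true directions
```

The covariance eigendecomposition is what buys the resolution beyond plain beamforming: closely spaced sources stay separable as long as the signal subspace dimension is estimated correctly.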

  16. Reliability of MEG source imaging of anterior temporal spikes: analysis of an intracranially characterized spike focus.

    PubMed

    Wennberg, Richard; Cheyne, Douglas

    2014-05-01

    To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low-frequency filters (LFFs) and averaging of multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using an LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions, and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
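    The benefit of spike averaging reported above follows from the usual 1/√N suppression of uncorrelated background noise; a toy illustration with a synthetic spike template (all waveform parameters here are invented for illustration, not MEG data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    spike = np.exp(-((t - 0.5) ** 2) / 0.002)   # synthetic spike template
    sigma = 0.5                                  # background noise level

    # 64 repetitions of the same spike buried in independent noise
    trials = spike + sigma * rng.standard_normal((64, t.size))
    avg8  = trials[:8].mean(axis=0)
    avg64 = trials.mean(axis=0)

    # Residual noise shrinks roughly as sigma / sqrt(N)
    print(np.std(trials[0] - spike))  # ~0.5   (single spike)
    print(np.std(avg8 - spike))       # ~0.18  (8-spike average)
    print(np.std(avg64 - spike))      # ~0.06  (64-spike average)
    ```

    Averaging 8 topographically identical spikes thus cuts the noise nearly threefold, which is consistent with the reduced scatter of the dipole fits.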

  17. Hyperspectral Fluorescence and Reflectance Imaging Instrument

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey

    2008-01-01

    The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from the ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal-plane detector array. The spectral range of the current imaging spectrometer is from 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm. 
The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.
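The differencing process described above can be sketched with toy frames; the leakage factor, frame sizes and signal levels below are invented for illustration, not instrument values. Subtracting a UV-blocked frame from a UV-illuminated frame removes the residual reflective contribution and isolates the weak fluorescence:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical frames from the line imager (rows = spatial, cols = wavelength)
reflectance  = rng.uniform(0.2, 1.0, (4, 5))     # true reflectance signal
fluorescence = rng.uniform(0.0, 0.05, (4, 5))    # much weaker emission

frame_white  = reflectance                        # white LEDs on: reflectance
frame_uv_on  = 0.01 * reflectance + fluorescence  # UV on: source leakage + emission
frame_uv_off = 0.01 * reflectance                 # UV filtered: leakage only

recovered = frame_uv_on - frame_uv_off            # differencing isolates fluorescence
print(np.allclose(recovered, fluorescence))       # True
```

Because both frames come from the same spatially registered imager, no co-registration step is needed before the subtraction.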

  18. Testing a high-power LED based light source for hyperspectral imaging microscopy

    NASA Astrophysics Data System (ADS)

    Klomkaew, Phiwat; Mayes, Sam A.; Rich, Thomas C.; Leavesley, Silas J.

    2017-02-01

    Our lab has worked to develop high-speed hyperspectral imaging systems that scan the fluorescence excitation spectrum for biomedical imaging applications. Hyperspectral imaging can be used in remote sensing, medical imaging, reaction analysis, and other applications. Here, we describe the development of a hyperspectral imaging system comprising an inverted Nikon Eclipse microscope, an sCMOS camera, and a custom light source that utilized a series of high-power LEDs. LED selection was performed to achieve wavelengths of 350-590 nm. To reduce scattering, LEDs with low viewing angles were selected. LEDs were surface-mount soldered and powered by an RCD. We utilized 3D-printed mounting brackets to assemble all circuit components. Spectroradiometric calibration was performed using a spectrometer (QE65000, Ocean Optics) and an integrating sphere (FOIS-1, Ocean Optics). Optical output and LED driving current were measured over a range of illumination intensities. A normalization algorithm was used to calibrate and optimize the intensity of the light source. The highest illumination power was at 375 nm (3300 mW/cm2), while the lowest illumination power was at 515, 525, and 590 nm (5200 mW/cm2). Comparing the intensities supplied by each LED to the intensities measured at the microscope stage, we found a substantial loss in power output. Future work will focus on using two of the same LEDs to double the power and on identifying additional LEDs and/or laser diodes in this wavelength range. This custom hyperspectral imaging system could be used for the detection of cancer and the identification of biomolecules.

  19. Ultrafast Synthetic Transmit Aperture Imaging Using Hadamard-Encoded Virtual Sources With Overlapping Sub-Apertures.

    PubMed

    Ping Gong; Pengfei Song; Shigao Chen

    2017-06-01

    The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies, such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. USTA was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) have been achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
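    Step 2 of the USTA sequence, Hadamard encoding and decoding of virtual sources, can be sketched as follows; the per-source echo amplitudes are illustrative scalars, whereas a real implementation would operate on full RF channel data:

    ```python
    import numpy as np

    def hadamard(n):
        """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    n = 4
    H = hadamard(n)
    sources = np.array([1.0, 2.0, 3.0, 4.0])   # echo amplitude per virtual source

    # Each transmit fires all virtual sources with +/-1 polarities from one row;
    # this is what boosts SNR relative to firing one source at a time.
    received = H @ sources                     # n encoded transmissions
    decoded = H.T @ received / n               # decoding recovers each source
    print(decoded)                             # [1. 2. 3. 4.]
    ```

    Because every transmission carries energy from all n sources, decoding yields the per-source responses with an SNR gain of roughly √n over sequential firing, without lowering the frame rate.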

  20. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques for indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions with declination widths of one degree control the query run times. Use of an index strategy in which the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that, for this logical database partitioning schema, the limiting cadence achieved by the pipeline when processing IPHAS data is 25 s.
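    The one-degree declination partitioning can be sketched in a few lines; the toy in-memory catalogue below stands in for the database tables, and the small-angle distance formula is the usual flat-sky approximation. Only the strips that can geometrically contain a match are scanned:

    ```python
    import math
    from collections import defaultdict

    # Toy catalogue partitioned into one-degree declination strips
    catalogue = defaultdict(list)   # floor(dec) -> [(ra, dec, source_id)]
    for sid, (ra, dec) in enumerate([(10.001, 5.2), (10.5, 5.9), (200.0, -3.4)]):
        catalogue[math.floor(dec)].append((ra, dec, sid))

    def crossmatch(ra, dec, radius_deg=0.001):
        """Positional look-up that scans only the relevant declination strips."""
        hits = []
        for strip in range(math.floor(dec - radius_deg),
                           math.floor(dec + radius_deg) + 1):
            for cra, cdec, sid in catalogue[strip]:
                dra = (ra - cra) * math.cos(math.radians(dec))  # flat-sky RA term
                if dra * dra + (dec - cdec) ** 2 <= radius_deg ** 2:
                    hits.append(sid)
        return hits

    print(crossmatch(10.0015, 5.2))   # [0]: matches the first catalogued source
    ```

    Keeping each strip sorted by declination, as in the paper's index strategy, would further turn the inner scan into a binary-search range query.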

  1. Brainstorm: A User-Friendly Application for MEG/EEG Analysis

    PubMed Central

    Tadel, François; Baillet, Sylvain; Mosher, John C.; Pantazis, Dimitrios; Leahy, Richard M.

    2011-01-01

    Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI). PMID:21584256

  2. ImTK: an open source multi-center information management toolkit

    NASA Astrophysics Data System (ADS)

    Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.

    2008-03-01

    The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.

  3. Characterization of groups using composite kernels and multi-source fMRI analysis data: application to schizophrenia

    PubMed Central

    Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.

    2011-01-01

    Pattern classification of brain imaging data can enable the automatic detection of differences in cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information related to the regions of the brain that are most relevant to detect these differences by means of feature selection procedures, which are also well-suited to deal with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95%, estimated with a leave-two-out cross-validation procedure, was achieved by doing multi-source data classification. In addition, it is shown that the classification accuracy rate obtained by using multi-source data surpasses that reached by using single-source data, hence showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948

  4. Super-contrast photoacoustic resonance imaging

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Zhang, Ruochong; Feng, Xiaohua; Liu, Siyu; Zheng, Yuanjin

    2018-02-01

    In this paper, a new imaging modality, named photoacoustic resonance imaging (PARI), is proposed and experimentally demonstrated. In contrast to the wideband PA signal induced by a conventional single nanosecond laser pulse, the proposed PARI method utilizes a multi-burst modulated laser source to induce a PA resonant signal with enhanced signal strength and narrower bandwidth. Moreover, imaging contrast can be clearly improved over conventional single-pulse laser-based PA imaging by selecting the optimum modulation frequency of the laser source, which originates from physical properties of different materials beyond the optical absorption coefficient. Specifically, the imaging procedure is as follows: (1) perform conventional PA imaging by modulating the laser source as a short pulse to identify the location of the target and the background; (2) shine the modulated laser beam on the background and target respectively to characterize their individual resonance frequencies by sweeping the modulation frequency of the CW laser source; (3) select the resonance frequency of the target as the modulation frequency of the laser source, perform imaging and obtain the first PARI image, then choose the resonance frequency of the background as the modulation frequency and obtain the second PARI image; (4) subtract the first PARI image from the second PARI image, yielding contrast-enhanced PARI results relative to the conventional PA imaging of step 1. Experimental validation on phantoms has been performed to show the merits of the proposed PARI method, with much improved image contrast.
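    A toy numerical sketch of steps 3 and 4, assuming a Lorentzian resonance response and invented resonance frequencies and Q-width for target and background (none of these values come from the paper):

    ```python
    import numpy as np

    # Toy scene: a small target patch embedded in background
    target_mask = np.zeros((6, 6), dtype=bool)
    target_mask[2:4, 2:4] = True

    def pa_image(mod_freq, f_target=2.0, f_bg=1.0, q=0.1):
        """Hypothetical PA amplitude map under a Lorentzian resonance response."""
        resp_t = 1.0 / (1.0 + ((mod_freq - f_target) / q) ** 2)
        resp_b = 1.0 / (1.0 + ((mod_freq - f_bg) / q) ** 2)
        return np.where(target_mask, resp_t, resp_b)

    img_at_target_res = pa_image(2.0)            # modulate at target resonance
    img_at_bg_res     = pa_image(1.0)            # modulate at background resonance
    diff = img_at_target_res - img_at_bg_res     # step 4: subtraction

    contrast = diff[target_mask].mean() - diff[~target_mask].mean()
    print(contrast)   # ≈ 1.98: nearly double the single-image contrast of 1
    ```

    The subtraction adds the two resonance selectivities together, which is where the contrast enhancement over a single-frequency image comes from.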

  5. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Also, because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, usually CCD detectors are used. The information to be extracted from the frames is the individual object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise. There are three different sources of noise: celestial background sources, the object signal itself and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or as their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image-handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
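    The simplest analytic centroiding baseline against which such LS methods are compared is the intensity-weighted first moment; a sketch on a synthetic, non-crowded PSF (the background handling here is deliberately crude, and the PSF parameters are invented):

    ```python
    import numpy as np

    def centroid(frame):
        """Intensity-weighted centroid (first image moments), returned as (y, x)."""
        frame = frame - np.median(frame)   # crude background removal
        frame[frame < 0] = 0
        total = frame.sum()
        ys, xs = np.indices(frame.shape)
        return (ys * frame).sum() / total, (xs * frame).sum() / total

    # Synthetic non-resolved object: a Gaussian PSF centred at (12.3, 7.8)
    ys, xs = np.indices((25, 25))
    psf = np.exp(-((ys - 12.3) ** 2 + (xs - 7.8) ** 2) / (2 * 1.5 ** 2))
    print(centroid(psf))   # close to (12.3, 7.8)
    ```

    A weighted LS fit of a point-source model improves on this moment estimate in noisy or crowded frames, at the cost of the extra model evaluation the paper seeks to keep analytic.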

  6. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees sampled every few days over a total of ten years -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels at the petabyte scale could also be conducted at the DAC, using compute resources allocated in a manner similar to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.

  7. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurement is the task of estimating the locations of the microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
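    The cancellation of the unknown source function by convolving with reference traces can be verified on toy 1D traces; the Green's functions and wavelets below are random stand-ins, and the two-receiver setup is an illustrative simplification of the convolved-objective idea:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    w_obs = rng.standard_normal(20)        # unknown true source function
    w_syn = rng.standard_normal(20)        # arbitrary wavelet used for modelling

    g_main = rng.standard_normal(50)       # Green's function at the imaging receiver
    g_ref  = rng.standard_normal(50)       # Green's function at a reference receiver

    d_obs   = np.convolve(g_main, w_obs)   # observed trace
    d_syn   = np.convolve(g_main, w_syn)   # modelled trace (correct velocity here)
    ref_obs = np.convolve(g_ref, w_obs)    # observed reference trace
    ref_syn = np.convolve(g_ref, w_syn)    # modelled reference trace

    # Convolved objective: cross-convolutions agree when the model is correct,
    # regardless of the (unknown) source function and ignition time.
    lhs = np.convolve(d_obs, ref_syn)
    rhs = np.convolve(d_syn, ref_obs)
    print(np.max(np.abs(lhs - rhs)))       # ~0 for the correct model
    ```

    The identity holds because convolution commutes: g_main * w_obs * g_ref * w_syn equals g_main * w_syn * g_ref * w_obs, so any residual between lhs and rhs measures model error rather than wavelet error.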

  8. Automated object-based classification of rain-induced landslides with VHR multispectral images in Madeira Island

    NASA Astrophysics Data System (ADS)

    Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.

    2015-09-01

    A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier on a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira Island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier east-facing slopes.

  9. THz-wave parametric sources and imaging applications

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo

    2004-12-01

    We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  10. Methods for the analysis of ordinal response data in medical image quality assessment.

    PubMed

    Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G

    2016-07-01

    The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
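    One weakness noted above, that taking means of consecutive category numbers discards distributional information, is easy to demonstrate (the ratings below are invented):

    ```python
    from statistics import mean
    from collections import Counter

    # Two observers rate the same 10 images on a 5-point Likert scale
    obs_a = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]   # consistently "average"
    obs_b = [1, 5, 1, 5, 1, 5, 1, 5, 1, 5]   # strongly polarized

    print(mean(obs_a), mean(obs_b))   # 3 and 3: identical means...
    print(Counter(obs_a))             # ...but completely different
    print(Counter(obs_b))             # response distributions
    ```

    A proportional odds model works on the category frequencies directly, so it distinguishes these two observers where a comparison of means cannot; it can also absorb the repeated-image and repeated-observer correlation structure through multilevel terms.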

  11. CellStress - open source image analysis program for single-cell analysis

    NASA Astrophysics Data System (ADS)

    Smedh, Maria; Beck, Caroline; Sott, Kristin; Goksör, Mattias

    2010-08-01

    This work describes our image-analysis software, CellStress, which has been developed in Matlab and is issued under a GPL license. CellStress was developed in order to analyze migration of fluorescent proteins inside single cells during changing environmental conditions. CellStress can also be used to score information regarding protein aggregation in single cells over time, which is especially useful when monitoring cell signaling pathways involved in e.g. Alzheimer's or Huntington's disease. Parallel single-cell analysis of large numbers of cells is an important part of the research conducted in systems biology and quantitative biology in order to mathematically describe cellular processes. To quantify properties for single cells, large amounts of data acquired during extended time periods are needed. Manual analyses of such data involve huge efforts and could also include a bias, which complicates the use and comparison of data for further simulations or modeling. Therefore, it is necessary to have an automated and unbiased image analysis procedure, which is the aim of CellStress. CellStress utilizes cell contours detected by CellStat (developed at Fraunhofer-Chalmers Centre), which identifies cell boundaries using bright field images, and thus reduces the fluorescent labeling needed.

  12. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The evaluation of the proposed method for segmentation of images was done by comparing the automatic segmentation with the manual segmentation. To further evaluate the proposed method considering morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented a good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and thus leading to more reliable results.

  13. Two X-ray pulsars: 2S 1145-619 and 1E 1145.1-6141 [identified with imaging proportional counter]

    NASA Technical Reports Server (NTRS)

    Lamb, R. C.; Markert, T. H.; Hartman, R. C.; Thompson, D. J.; Bignami, G. F.

    1980-01-01

    Observations from the Einstein Observatory reveal a previously unreported source, 1E 1145.1-6141, within 15 arcmin of 2S 1145-619 and of comparable intensity during July 1979. Periodicity analysis of the data shows a 290 ± 2 s period for the 2S source and a 298 ± 4 s period for the 1E source, confirming the previous Ariel V report of two periods in this range from this region of the sky.
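    A periodicity analysis of this kind is commonly done by epoch folding; the sketch below is a generic illustration of the technique, with simulated event times, bin count, and search grid that are hypothetical rather than the Einstein data:

    ```python
    import numpy as np

    def folded_chi2(times, period, nbins=16):
        # Fold event times at a trial period and compute the chi-square of
        # the binned pulse profile against a flat (unpulsed) profile.
        phase = (times % period) / period
        counts, _ = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
        expected = counts.mean()
        return ((counts - expected) ** 2 / expected).sum()

    def period_search(times, pmin, pmax, nsteps=201):
        # The chi-square statistic peaks near the true pulse period.
        periods = np.linspace(pmin, pmax, nsteps)
        scores = [folded_chi2(times, p) for p in periods]
        return periods[int(np.argmax(scores))]
    ```

    For a strongly pulsed source, the statistic is sharply peaked at the pulse period, and the quoted uncertainties (e.g. ± 2 s) reflect the width of that peak.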

  14. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The aim of digital image fusion is to combine the important visual parts from various sources to enhance the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and thereby obtain a fused image. The process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Qualitative sub-bands are thus selected from the different sources to form a high-quality HVS-based fused image, whose quality is evaluated with standard fusion metrics. The results show the superiority of this approach among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
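    The two-step procedure (DWT decomposition, then HVS-weighted sub-band selection) might be sketched as follows, using a single-level Haar transform. The per-band weights and the weighted-selection rule here are illustrative placeholders, not the values or rule derived in the paper:

    ```python
    import numpy as np

    def haar_dwt2(img):
        # single-level 2-D Haar transform: returns LL, LH, HL, HH sub-bands
        a = (img[0::2, :] + img[1::2, :]) / 2.0
        d = (img[0::2, :] - img[1::2, :]) / 2.0
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
        LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return LL, LH, HL, HH

    def haar_idwt2(LL, LH, HL, HH):
        # exact inverse of haar_dwt2
        a = np.zeros((LL.shape[0], LL.shape[1] * 2))
        d = np.zeros_like(a)
        a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
        d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
        out = np.zeros((a.shape[0] * 2, a.shape[1]))
        out[0::2, :], out[1::2, :] = a + d, a - d
        return out

    def fuse(img_a, img_b, hvs_weights=(0.5, 0.9, 0.9, 1.0)):
        # per-band weighted selection: w = 1.0 is pure maximum selection,
        # w = 0.5 is plain averaging; the weight values are placeholders
        bands = []
        for w, ba, bb in zip(hvs_weights, haar_dwt2(img_a), haar_dwt2(img_b)):
            pick_a = np.abs(ba) >= np.abs(bb)
            bands.append(np.where(pick_a, w * ba + (1 - w) * bb,
                                          w * bb + (1 - w) * ba))
        return haar_idwt2(*bands)
    ```

    With identical inputs the fusion is the identity, since the weighted selection reduces to the shared coefficient; with differing inputs, detail bands (weighted toward 1.0) follow the maximum-selection rule the abstract names.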

  15. A multi-phenotypic imaging screen to identify bacterial effectors by exogenous expression in a HeLa cell line.

    PubMed

    Collins, Adam; Huett, Alan

    2018-05-15

    We present a high-content screen (HCS) for the simultaneous analysis of multiple phenotypes in HeLa cells expressing an autophagy reporter (mcherry-LC3) and one of 224 GFP-fused proteins from the Crohn's Disease (CD)-associated bacterium, Adherent Invasive E. coli (AIEC) strain LF82. Using automated confocal microscopy and image analysis (CellProfiler), we localised GFP fusions within cells, and monitored their effects upon autophagy (an important innate cellular defence mechanism), cellular and nuclear morphology, and the actin cytoskeleton. This data will provide an atlas for the localisation of 224 AIEC proteins within human cells, as well as a dataset to analyse their effects upon many aspects of host cell morphology. We also describe an open-source, automated, image-analysis workflow to identify bacterial effectors and their roles via the perturbations induced in reporter cell lines when candidate effectors are exogenously expressed.

  16. TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.

    PubMed

    Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D

    2018-05-08

    Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
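    A minimal sketch of the kind of spatiotemporal quantification TASI performs is shown below, assuming simple global thresholding and a log-linear growth fit; both are hypothetical stand-ins for the actual pipeline:

    ```python
    import numpy as np

    def spheroid_area_series(frames, threshold=None):
        # Segment a bright spheroid in each time-lapse frame and return
        # its pixel area over time (global thresholding as a stand-in
        # for TASI's spatiotemporal segmentation).
        areas = []
        for f in frames:
            t = threshold if threshold is not None else f.mean() + f.std()
            areas.append(int((f > t).sum()))
        return np.array(areas)

    def exponential_growth_rate(areas, dt=1.0):
        # fit log(area) = log(A0) + k*t by least squares -> growth rate k
        t = np.arange(len(areas)) * dt
        k, log_a0 = np.polyfit(t, np.log(areas), 1)
        return k
    ```

    Feature extraction and statistical comparison between conditions would then operate on such per-spheroid time series.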

  17. "Proximal Sensing" capabilities for snow cover monitoring

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo

    2013-04-01

    The seasonal snow cover represents one of the most important land-cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of touristic activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used to observe snow-covered areas; properly processed, those images can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing data even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having good resolution (at least 800x600 in 16 million colours) and a very good sampling frequency (hourly images taken throughout the year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating available water resources, and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover in webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than those obtained using the image-segmentation routines implemented in commercial image-processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. 
The analysis of this kind of image can represent a useful element to support the interpretation of remote sensing images, especially those provided by high-spatial-resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
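    A snow/no-snow pixel classifier for webcam frames could be sketched as below. The brightness and saturation thresholds are hypothetical, and this is not the Snow-noSnow algorithm, which the record does not detail:

    ```python
    import numpy as np

    def snow_mask(rgb, brightness=170, saturation=0.12):
        # Classify snow pixels in a webcam RGB frame (uint8, H x W x 3).
        # Assumption: snow pixels are bright and nearly achromatic.
        img = rgb.astype(float)
        mx = img.max(axis=2)
        mn = img.min(axis=2)
        sat = np.where(mx > 0, (mx - mn) / (mx + 1e-9), 0.0)
        return (mx >= brightness) & (sat <= saturation)

    def snow_cover_fraction(rgb):
        # fraction of the frame classified as snow-covered
        return float(snow_mask(rgb).mean())
    ```

    A time series of such per-frame fractions is the kind of quantity one would compare against photo-interpretation or satellite-derived snow maps.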

  18. Seismic reflection imaging with conventional and unconventional sources

    NASA Astrophysics Data System (ADS)

    Quiros Ugalde, Diego Alonso

    This manuscript reports the results of research using both conventional and unconventional energy sources as well as conventional and unconventional analysis to image crustal structure using reflected seismic waves. The work presented here includes the use of explosions to investigate the Taiwanese lithosphere, the use of 'noise' from railroads to investigate the shallow subsurface of the Rio Grande rift, and the use of microearthquakes to image subsurface structure near an active fault zone within the Appalachian mountains. Chapter 1 uses recordings from the land refraction and wide-angle reflection component of the Taiwan Integrated Geodynamic Research (TAIGER) project. The most prominent reflection feature imaged by these surveys is an anomalously strong reflector found in northeastern Taiwan. The goal of this chapter is to analyze the TAIGER recordings and to place the reflector into a geologic framework that fits with the modern tectonic kinematics of the region. Chapter 2 uses railroad traffic as a source for reflection profiling within the Rio Grande rift. Here the railroad recordings are treated in an analogous way to Vibroseis recordings. These results suggest that railroad noise in general can be a valuable new tool in imaging and characterizing the shallow subsurface in environmental and geotechnical studies. In chapters 3 and 4, earthquakes serve as the seismic imaging source. In these studies the methodology of Vertical Seismic Profiling (VSP) is borrowed from the oil and gas industry to develop reflection images. In chapter 3, a single earthquake is used to probe a small area beneath Waterboro, Maine. In chapter 4, the same method is applied to multiple earthquakes to take advantage of the increased redundancy that results from multiple events illuminating the same structure. 
The latter study demonstrates how dense arrays can be a powerful new tool for delineating deep structure, and monitoring its temporal changes, in areas characterized by significant seismic activity.
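    The Vibroseis-style processing borrowed in chapter 2 rests on cross-correlating each recorded trace with the extended source signature to compress it into an impulsive wavelet. A minimal sketch, with a synthetic random sweep standing in for real railroad noise:

    ```python
    import numpy as np

    def vibroseis_correlate(trace, sweep):
        # Cross-correlate a recorded trace with the source sweep; the
        # correlation peak marks the two-way travel time of a reflector.
        return np.correlate(trace, sweep, mode="full")[len(sweep) - 1:]
    ```

    The peak of the correlated trace sits at the lag by which the sweep is delayed in the recording, which is the same principle whether the source is a controlled vibrator or continuous railroad noise.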

  19. Validation of Optical Coherence Tomography against Micro-computed Tomography for Evaluation of Remaining Coronal Dentin Thickness.

    PubMed

    Majkut, Patrycja; Sadr, Alireza; Shimada, Yasushi; Sumi, Yasunori; Tagami, Junji

    2015-08-01

    Optical coherence tomography (OCT) is a noninvasive modality for obtaining in-depth images of biological structures. A dental OCT system has become available for chairside application. This in vitro study hypothesized that swept-source OCT can be used to measure the remaining dentin thickness (RDT) at the roof of the dental pulp chamber during excavation of deep caries. Human molar teeth with deep occlusal caries were investigated. After obtaining 2-dimensional and 3-dimensional OCT scans using a swept-source OCT system at a 1330-nm center wavelength, RDT was evaluated with image analysis software. Microfocus x-ray computed tomographic (micro-CT) images were obtained from the same cross sections to confirm the OCT findings. The smallest RDT values at the visible pulp horn were measured on OCT and micro-CT imaging and compared using the Pearson correlation. Observation of the pulpal horns and pulp chamber roof under OCT and micro-CT imaging yielded comparable images that allowed measurement of coronal dentin thickness. RDT measured by OCT showed optical values ranging between 140 and 2300 μm, corresponding to a range of 92-1524 μm on micro-CT imaging. A strong correlation was found between the 2 techniques (r = 0.96, P < .001). Further analysis indicated a linear regression with a slope of 1.54 and no intercept, closely matching the bulk refractive index of dentin. OCT enables visualization of anatomic structures during deep caries excavation. Exposure of the vital dental pulp through removal of very thin remaining coronal dentin can be avoided with this novel noninvasive technique. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
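    The two statistics reported here, the Pearson correlation and a zero-intercept regression slope interpreted as the refractive index, can be sketched directly (generic formulas, not the study's analysis software):

    ```python
    import numpy as np

    def pearson_r(x, y):
        # standard Pearson correlation coefficient
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        xc, yc = x - x.mean(), y - y.mean()
        return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

    def refractive_index_from_slope(optical, true):
        # zero-intercept least squares for optical = n * true  ->  n = slope,
        # since OCT reports optical path length = refractive index x thickness
        optical = np.asarray(optical, float)
        true = np.asarray(true, float)
        return (optical * true).sum() / (true ** 2).sum()
    ```

    With micro-CT treated as the ground-truth thickness, a slope near 1.54 is exactly the "closely matching the bulk refractive index of dentin" observation.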

  20. Development of a stationary chest tomosynthesis system using carbon nanotube x-ray source array

    NASA Astrophysics Data System (ADS)

    Shan, Jing

    X-ray imaging systems have proven useful for providing quick and easy imaging in both clinical settings and emergency situations, greatly improving hospital workflow. However, conventional radiography systems lack 3D information in the images, and tissue overlap in the 2D projection image results in low sensitivity and specificity. Both computed tomography and digital tomosynthesis, the two conventional 3D imaging modalities, require a complex gantry to mechanically translate the x-ray source to various positions. Over the past decade, our research group has developed a carbon nanotube (CNT) based x-ray source technology. CNT x-ray sources allow multiple x-ray sources to be packed into a single x-ray tube, with each individual source in the array electronically switchable. This technology enables the development of stationary tomographic imaging modalities without complex mechanical gantries. The goal of this work is to develop a stationary digital chest tomosynthesis (s-DCT) system and implement it for a clinical trial. The feasibility of s-DCT was investigated, and it was found that the CNT source array can provide sufficient x-ray output for chest imaging. Phantom images have shown image quality comparable to conventional DCT. The s-DCT system was then used to study the effects of source array configuration on tomosynthesis image quality, and the feasibility of physiologically gated s-DCT. Using physical measures of spatial resolution, the 2D source configuration was shown to have improved depth resolution and comparable in-plane resolution. The prospectively gated tomosynthesis images showed substantial reduction of the image blur associated with lung motion. The system was also used to investigate the feasibility of using s-DCT as a diagnosis and monitoring tool for cystic fibrosis patients, and a new scatter-reduction method for s-DCT was studied. 
Finally, an s-DCT system was constructed by retrofitting the source array to a Carestream digital radiography system. The system passed the electrical and radiation safety tests and was installed in Marsico Hall. The patient trial started in March 2015, and the first patient was successfully imaged.

  1. On the release of cppxfel for processing X-ray free-electron laser images.

    PubMed

    Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian

    2016-06-01

    As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel , a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.

  2. On the release of cppxfel for processing X-ray free-electron laser images

    DOE PAGES

    Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K.; ...

    2016-05-11

    As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.

  3. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages.

    PubMed

    Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B

    2016-11-15

    A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.

  4. Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; Douglas, J.; Patterson, L.

    2015-07-15

    Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person to person or sample to sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to 'pass' or 'fail' an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data is extracted. • Preliminary characterization results demonstrate consistency of the algorithm.
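    A quality metric of the kind described (illumination gradient, focus, scratch fraction) might be sketched as follows. The formulas and thresholds are hypothetical stand-ins, not the actual CellProfiler pipeline:

    ```python
    import numpy as np

    def quality_metrics(img):
        # illumination gradient: relative brightness difference between halves
        h = img.shape[0] // 2
        illum = abs(img[:h].mean() - img[h:].mean()) / (img.mean() + 1e-12)
        # focus: variance of a discrete Laplacian (sharper image -> higher)
        lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        focus = lap.var()
        # scratch fraction: share of pixels far darker than the median
        scratch = float((img < 0.5 * np.median(img)).mean())
        return illum, focus, scratch

    def passes(img, max_illum=0.2, min_focus=1e-4, max_scratch=0.05):
        # reproducible pass/fail decision from the three scores
        illum, focus, scratch = quality_metrics(img)
        return illum <= max_illum and focus >= min_focus and scratch <= max_scratch
    ```

    The point of such a metric is that the same image always gets the same score, removing the person-to-person variability the abstract describes.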

  5. Vector-Based Data Services for NASA Earth Science

    NASA Astrophysics Data System (ADS)

    Rodriguez, J.; Roberts, J. T.; Ruvane, K.; Cechini, M. F.; Thompson, C. K.; Boller, R. A.; Baynes, K.

    2016-12-01

    Vector data sources offer opportunities for mapping and visualizing science data in a way that allows more customizable rendering and deeper data analysis than traditional raster images, and popular formats like GeoJSON and Mapbox Vector Tiles allow diverse types of geospatial data to be served in a high-performance, easily consumed package. Vector data is especially suited to highly dynamic mapping applications and visualization of complex datasets, while growing support for vector formats and features in open-source mapping clients has made utilizing them easier and more powerful than ever. NASA's Global Imagery Browse Services (GIBS) is working to make NASA data more easily and conveniently accessible than ever by serving vector datasets via GeoJSON, Mapbox Vector Tiles, and raster images. This presentation will review these output formats and the services, including WFS, WMS, and WMTS, that can be used to access the data, as well as some ways in which vector sources can be utilized in popular open-source mapping clients like OpenLayers. Lessons learned from GIBS' recent move towards serving vector data will be discussed, as well as how to use GIBS open-source software to create, configure, and serve vector data sources using MapServer and the GIBS OnEarth Apache module.

  6. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework to aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
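    The true-positive/false-negative evaluation can be sketched as per-image overlap counts between an automatic segmentation and its manual ground truth. This is a generic sketch of such metrics, not the 3D Slicer module itself:

    ```python
    import numpy as np

    def segmentation_metrics(pred, truth):
        # Per-frame overlap metrics between a binary automatic segmentation
        # and a manual ground truth (boolean arrays of equal shape).
        pred = np.asarray(pred, bool)
        truth = np.asarray(truth, bool)
        tp = (pred & truth).sum()    # bone correctly segmented
        fp = (pred & ~truth).sum()   # boneless region marked as bone
        fn = (~pred & truth).sum()   # bone missed
        tn = (~pred & ~truth).sum()  # boneless region correctly left empty
        sensitivity = tp / (tp + fn) if tp + fn else 1.0
        specificity = tn / (tn + fp) if tn + fp else 1.0
        return sensitivity, specificity
    ```

    Computing these per frame, then taking the mean and standard deviation along the volume, gives exactly the per-slice and whole-volume summaries the abstract describes.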

  7. Imaging C. elegans embryos using an epifluorescent microscope and open source software.

    PubMed

    Verbrugghe, Koen J C; Chan, Raymond C

    2011-03-24

    Cellular processes, such as chromosome assembly, segregation, and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples(1,2). Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage(3), thus providing an ideal experimental model for studying questions in cell biology(4,5) and development(6-9). C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis(10,11)) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis(12-15)). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters(16,17). These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo(18-21). In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed-sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process.

  8. Microscale reconstruction of biogeochemical substrates using multimode X-ray tomography and scanning electron microscopy

    NASA Astrophysics Data System (ADS)

    Miller, M.; Miller, E.; Liu, J.; Lund, R. M.; McKinley, J. P.

    2012-12-01

    X-ray computed tomography (CT), scanning electron microscopy (SEM), electron microprobe analysis (EMP), and computational image analysis are mature technologies used in many disciplines. Cross-discipline combination of these imaging and image-analysis technologies is the focus of this research, which uses laboratory and light-source resources in an iterative approach. The objective is to produce images across length scales, taking advantage of instrumentation that is optimized for each scale, and to unify them into a single compositional reconstruction. Initially, CT images will be collected using both x-ray absorption and differential phase contrast modes. The imaged sample will then be physically sectioned and the exposed surfaces imaged and characterized via SEM/EMP. The voxel slice corresponding to the physical sample surface will be isolated computationally, and the volumetric data will be combined with two-dimensional SEM images along CT image planes. This registration step will take advantage of the similarity between the X-ray absorption (CT) and backscattered electron (SEM) coefficients (both proportional to average atomic number in the interrogated volume) as well as the images' mutual information. Elemental and solid-phase distributions on the exposed surfaces, co-registered with SEM images, will be mapped using EMP. The solid-phase distribution will be propagated into three-dimensional space using computational methods relying on the estimation of compositional distributions derived from the CT data. If necessary, solid-phase and pore-space boundaries will be resolved using X-ray differential phase contrast tomography, x-ray fluorescence tomography, and absorption-edge microtomography at a light-source facility. Computational methods will be developed to register and model images collected over varying scales and data types. Image resolution, physically and dynamically, is qualitatively different for the electron microscopy and CT methodologies. 
Routine CT images are resolved at 10-20 μm, while SEM images are resolved at 10-20 nm; grayscale values vary according to collection time and instrument sensitivity; and compositional sensitivities via EMP vary in interrogation volume and scale. We have so far successfully registered SEM imagery within a multimode tomographic volume and have used standard methods to isolate pore space within the volume. We are developing a three-dimensional solid-phase identification and registration method that is constrained by bulk-sample X-ray diffraction Rietveld refinements. The results of this project will prove useful in fields that require the fine-scale definition of solid-phase distributions and relationships, and could replace more inefficient methods for making these estimations.

  9. An open source software for analysis of dynamic contrast enhanced magnetic resonance images: UMMPerfusion revisited.

    PubMed

    Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald

    2016-01-14

    Perfusion imaging has become an important image-based tool to derive physiological information in various applications, like tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one-compartment model (1CP), a two-compartment exchange model (2CXM), a two-compartment uptake model (2CUM), a two-compartment filtration model (2FM), and finally the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open-source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing implementations of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically, and reports generated during data analysis ensure quality control.
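    As an illustration of the simplest of the five models: a one-compartment (1CP) tissue curve is the arterial input function convolved with an exponential impulse response. The discrete sketch below is a generic rendering of that model, not the UMMPerfusion code, and the parameter values in the comment are placeholders:

    ```python
    import numpy as np

    def one_compartment(aif, t, flow, volume):
        # Tissue concentration for a one-compartment perfusion model:
        #   C(t) = F * (AIF (*) exp(-F t / V)),
        # evaluated by discrete convolution on a uniform time grid t.
        # flow (F) and volume (V) are free parameters fitted per voxel/ROI.
        dt = t[1] - t[0]
        irf = flow * np.exp(-flow * t / volume)  # impulse response function
        return np.convolve(aif, irf)[:len(t)] * dt
    ```

    Model fitting then amounts to adjusting F and V until this forward curve matches the measured contrast-agent concentration; the richer two-compartment models differ only in the shape of the impulse response.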

  10. THz optical design considerations and optimization for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Sung, Shijun; Garritano, James; Bajwa, Neha; Nowroozi, Bryan; Llombart, Nuria; Grundfest, Warren; Taylor, Zachary D.

    2014-09-01

    THz imaging system design will play an important role in making possible the imaging of targets with arbitrary properties and geometries. This study discusses design considerations and imaging performance optimization techniques for THz quasioptical imaging systems. Analysis of field and polarization distortion by off-axis parabolic (OAP) mirrors in THz imaging optics shows how distortions are carried through a series of mirrors while guiding the THz beam. While distortions of the beam profile by individual mirrors are not significant, these effects are compounded by a series of mirrors in antisymmetric orientation. It is shown that symmetric orientation of the OAP mirrors effectively cancels this distortion and recovers the original beam profile. Additionally, symmetric orientation can correct for some geometrical off-focusing due to misalignment. We also demonstrate an alternative method to test overall system optics alignment by investigating the imaging performance on a tilted target plane. An asymmetric signal profile as a function of the target plane's tilt angle indicates when one or more imaging components are misaligned, giving a preferred tilt direction. Such analysis can offer additional insight into often elusive source-device misalignment in an integrated system. The imaging-plane tilting characteristics are representative of a 3-D modulation transfer function of the imaging system. A symmetric tilted-plane response is preferred to optimize imaging performance.

  11. Integration of a clinical trial database with a PACS

    NASA Astrophysics Data System (ADS)

    van Herk, M.

    2014-03-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc., from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip, and transmit the selected data to a central trial server; 4) there, the data are stored in a DICOM archive that allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides whether to use the gateway in passive (receiving) mode or in an active mode that goes out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
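
    Step 2 above returns anonymized identifiers to the ECRF. One simple way to generate stable pseudonyms (a hypothetical sketch; the paper does not specify its anonymization scheme) is a keyed hash of the original DICOM identifier, so the gateway can return reproducible pseudonyms without exposing patient IDs outside the firewall:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, site_secret: bytes) -> str:
    """Map a DICOM identifier (e.g. PatientID) to a stable pseudonym.

    An HMAC with a per-site secret keeps the mapping reproducible on the
    gateway while being infeasible to invert outside the firewall.
    """
    digest = hmac.new(site_secret, identifier.encode(), hashlib.sha256)
    return "ANON-" + digest.hexdigest()[:12].upper()
```

    The same identifier always maps to the same pseudonym on a given gateway, which lets repeated queries for one patient stay linkable in the trial database.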

  12. On the analysis of large data sets

    NASA Astrophysics Data System (ADS)

    Ruch, Gerald T., Jr.

    We present a set of tools and techniques for performing detailed comparisons between computational models with high-dimensional parameter spaces and large sets of archival data. By combining a principal component analysis of a large grid of samples from the model with an artificial neural network, we create a powerful data visualization tool as well as a way to robustly recover physical parameters from a large set of experimental data. Our techniques are applied in the context of circumstellar disks, the likely sites of planetary formation. An analysis is performed applying the two-layer approximation of Chiang et al. (2001) and Dullemond et al. (2001) to the archive created by the Spitzer Space Telescope Cores to Disks Legacy program. We find two populations of disk sources. The first population is characterized by the lack of a puffed-up inner rim, while the second population appears to contain an inner rim that casts a shadow across the disk. The first population also exhibits a trend of increasing spectral index, while the second population exhibits a decreasing trend in the strength of the 20 μm silicate emission feature. We also present images of the giant molecular cloud W3 obtained with the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on board the Spitzer Space Telescope. The images encompass the star-forming regions W3 Main, W3(OH), and a region that we refer to as the Central Cluster, which encloses the emission nebula IC 1795. We present a star count analysis of the point sources detected in W3. The star count analysis shows that the stellar population of the Central Cluster, when compared to that in the background, contains an overdensity of sources. The Central Cluster also contains an excess of sources with colors consistent with Class II Young Stellar Objects (YSOs). An analysis of the color-color diagrams also reveals a large number of Class II YSOs in the Central Cluster.
Our results suggest that an earlier epoch of star formation created the Central Cluster, created a cavity, and triggered the active star formation in the W3 Main and W3(OH) regions. We also detect a new outflow and its candidate exciting star.
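
    The PCA-plus-lookup idea for recovering physical parameters can be sketched as follows. This is a minimal stand-in: a nearest-neighbor search in principal-component space replaces the paper's artificial neural network, and all names are illustrative:

```python
import numpy as np

def pca_basis(grid, n_components):
    """Principal components of a grid of model spectra (rows = models)."""
    mean = grid.mean(axis=0)
    _, _, vt = np.linalg.svd(grid - mean, full_matrices=False)
    return mean, vt[:n_components]

def recover_params(obs, grid, params, mean, basis):
    """Project the observation and the model grid into PC space and return
    the parameters of the nearest grid model (stand-in for the network)."""
    grid_pc = (grid - mean) @ basis.T
    obs_pc = (obs - mean) @ basis.T
    idx = np.argmin(np.sum((grid_pc - obs_pc) ** 2, axis=1))
    return params[idx]
```

    Working in the low-dimensional PC space is what makes both the visualization and the parameter search tractable for large archives.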

  13. LEDs as light source: examining quality of acquired images

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic; Funtanilla, Jeng; Hernandez, Jose

    2004-05-01

    Recent advances in technology have made light-emitting diodes (LEDs) viable in a number of applications, including vehicle stoplights, traffic lights, machine vision inspection, illumination, and street signs. This paper presents the results of comparing images taken by a videoscope using two different light sources. One of the sources is the internal metal halide lamp and the other is an LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection. Images were also qualitatively compared using image registration and transformation. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. The paper presents the results and discusses the usefulness and shortcomings of the various comparison methods.
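
    Two of the quantitative comparisons mentioned, the gray-level histogram and edge detection, can be sketched with plain NumPy. This is an illustrative re-implementation under assumed conventions (intensities normalized to [0, 1], Sobel kernels for edges), not the authors' code:

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Normalized gray-level histogram of an image with values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def sobel_edges(img):
    """Gradient-magnitude edge map via Sobel kernels (no imaging library)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)
```

    Comparing the two light sources then amounts to comparing these histograms and edge maps image by image.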

  14. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

    Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. From calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized by different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging of the point source is mainly based on negative refraction from anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly relevant to the field of imaging.

  15. Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.

    2003-01-01

    Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. 
Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
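
    The continuum-removal step that typically precedes Tetracorder-style spectral-feature comparison can be sketched as follows. This is a simplified illustration (a straight-line continuum anchored at the feature endpoints and a correlation score), not the actual Tetracorder algorithms or expert-system rules:

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by a straight-line continuum
    anchored at the feature's endpoints (simplified convex-hull step)."""
    line = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    return refl / line

def best_match(wl, observed, library):
    """Score each library spectrum by correlation of its continuum-removed
    feature with the observation; return the best-fitting material name."""
    obs = continuum_removed(wl, observed)
    scores = {name: np.corrcoef(obs, continuum_removed(wl, ref))[0, 1]
              for name, ref in library.items()}
    return max(scores, key=scores.get)
```

    Removing the continuum isolates the diagnostic absorption feature, so the score reflects the chemistry-driven band shape rather than overall albedo or slope.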

  16. FIMTrack: An open source tracking and locomotion analysis software for small animals.

    PubMed

    Risse, Benjamin; Berh, Dimitri; Otto, Nils; Klämbt, Christian; Jiang, Xiaoyi

    2017-05-01

    Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we introduced FIM, a novel imaging system capable of extracting high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among other features, the software offers several tracking strategies to cover a wide range of model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimulus-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.

  17. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes, including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze-frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interfaces to a variety of video players and film transport sub-systems.

  18. Gold nanoparticle contrast agents in advanced X-ray imaging technologies.

    PubMed

    Ahn, Sungsook; Jung, Sung Yong; Lee, Sang Joon

    2013-05-17

    Recently, there has been significant progress in the field of soft- and hard-X-ray imaging for a wide range of applications, both technically and scientifically, via developments in sources, optics and imaging methodologies. While one community is pursuing extensive applications of available X-ray tools, others are investigating improvements in techniques, including new optics, higher spatial resolutions and brighter compact sources. For increased image quality and more detailed investigation of characteristic biological phenomena, contrast agents have been employed extensively in imaging technologies. Heavy metal nanoparticles are excellent absorbers of X-rays and can offer excellent improvements in medical diagnosis and X-ray imaging. In this context, the role of gold (Au) is important for advanced X-ray imaging applications. Au has a long history in a wide range of medical applications and exhibits characteristic interactions with X-rays. Therefore, Au can offer a particular advantage as a tracer and a contrast enhancer in X-ray imaging technologies by sensing the variation in X-ray attenuation in a given sample volume. This review summarizes basic understanding of X-ray imaging, from device set-up to imaging technologies. It then covers recent studies in the development of X-ray imaging techniques utilizing gold nanoparticles (AuNPs) and their relevant applications, including two- and three-dimensional biological imaging, dynamical processes in a living system, single-cell-based imaging and quantitative analysis of circulatory systems. In addition to conventional medical applications, various novel research areas have been developed and are expected to be further developed through AuNP-based X-ray imaging technologies.

  19. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only entails manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
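
    The convolution-based, source-independent misfit can be sketched in 1-D. The idea: if the observed data are Green's functions convolved with an unknown wavelet w, and the modeled data use a different wavelet v, then convolving observed traces with a modeled reference trace (and vice versa) makes both sides carry w*v, so the unknown wavelets cancel whenever the Green's functions match. All names below are illustrative:

```python
import numpy as np

def convolved_misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Source-independent misfit: compare observed traces convolved with a
    synthetic reference trace against synthetic traces convolved with the
    observed reference trace."""
    resid = 0.0
    for tr_obs, tr_syn in zip(d_obs, d_syn):
        a = np.convolve(tr_obs, ref_syn)   # carries w * v
        b = np.convolve(tr_syn, ref_obs)   # also carries w * v
        resid += np.sum((a - b) ** 2)
    return resid
```

    The misfit vanishes when the modeled Green's functions equal the true ones, regardless of how wrong the assumed wavelet is, which is exactly what removes the source function from the inversion.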

  20. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.

    PubMed

    Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

    2010-01-01

    An increasingly common component of studies in synthetic and systems biology is analysis of the dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
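
    The frame-to-frame lineage-linking step can be sketched with a greedy nearest-centroid rule, a simplified stand-in for the paper's neighborhood-based scoring (names and the distance threshold are illustrative):

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedy nearest-centroid linking between consecutive frames.

    prev/curr are lists of (x, y) centroids. Returns a dict mapping each
    index in curr to the nearest index in prev within max_dist, or None
    for a newly appearing cell. Two children mapped to one parent is how
    a division event shows up.
    """
    links = {}
    for j, (cx, cy) in enumerate(curr):
        best, best_d = None, max_dist
        for i, (px, py) in enumerate(prev):
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = i, d
        links[j] = best
    return links
```

    Chaining these per-frame links over the whole movie yields the lineage tree for each initial cell.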

  1. Design of an automated imaging system for use in a space experiment

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.

    1991-01-01

    An experiment on an orbiting platform examines mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints of weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording of a 200-micron-diameter bubble with a resolution of 2 microns to serve as a primary source of data; varying frame rates from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.

  2. Emerging imaging tools for use with traumatic brain injury research.

    PubMed

    Hunter, Jill V; Wilde, Elisabeth A; Tong, Karen A; Holshouser, Barbara A

    2012-03-01

    This article identifies emerging neuroimaging measures considered by the inter-agency Pediatric Traumatic Brain Injury (TBI) Neuroimaging Workgroup. This article attempts to address some of the potential uses of more advanced forms of imaging in TBI as well as highlight some of the current considerations and unresolved challenges of using them. We summarize emerging elements likely to gain more widespread use in the coming years, because of 1) their utility in diagnosis, prognosis, and understanding the natural course of degeneration or recovery following TBI, and their potential for evaluating treatment strategies; 2) the ability of many centers to acquire these data with scanners and equipment that are readily available in existing clinical and research settings; and 3) advances in software that provide more automated, readily available, and cost-effective methods for large-scale image data analysis. These include multi-slice CT, volumetric MRI analysis, susceptibility-weighted imaging (SWI), diffusion tensor imaging (DTI), magnetization transfer imaging (MTI), arterial spin tag labeling (ASL), functional MRI (fMRI), including resting state and connectivity MRI, MR spectroscopy (MRS), and hyperpolarization scanning. However, we also include brief introductions to other specialized forms of advanced imaging that currently do require specialized equipment, for example, single photon emission computed tomography (SPECT), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG)/magnetic source imaging (MSI). Finally, we identify some of the challenges that users of the emerging imaging CDEs may wish to consider, including quality control, performing multi-site and longitudinal imaging studies, and MR scanning in infants and children.

  3. Multi-channel Analysis of Passive Surface Waves (MAPS)

    NASA Astrophysics Data System (ADS)

    Xia, J.; Cheng, F. Mr; Xu, Z.; Wang, L.; Shen, C.; Liu, R.; Pan, Y.; Mi, B.; Hu, Y.

    2017-12-01

    Urbanization is an inevitable trend in the modernization of human society. At the end of 2013, the Chinese central government launched a national urbanization plan, "Three 100 Million People", which aggressively and steadily pushes forward urbanization. Based on the plan, by 2020 approximately 100 million people from rural areas will permanently settle in towns, the dwelling conditions of about 100 million people in towns and villages will be improved, and about 100 million people in central and western China will permanently settle in towns. China's urbanization process will run at the highest speed in the country's history of urbanization. Environmentally friendly, non-destructive and non-invasive geophysical assessment methods have played an important role in this process. Because of the human noise and electromagnetic fields due to industrial life, geophysical methods already used in urban environments (gravity, magnetics, electricity, seismic) face great challenges. However, human activity provides an effective source for passive seismic methods. Claerbout pointed out that the wavefield received at one point with excitation at another point can be reconstructed by calculating the cross-correlation of noise records at two surface points. Based on this idea (cross-correlation of two noise records) and the virtual source method, we proposed Multi-channel Analysis of Passive Surface Waves (MAPS). MAPS mainly uses traffic noise recorded with a linear receiver array. Because Multi-channel Analysis of Surface Waves produces a shear (S) wave velocity model with high resolution in the shallow part of the model, MAPS combines acquisition and processing of active-source and passive-source data in the same flow, without requiring them to be distinguished. MAPS is also capable of real-time quality control of noise recordings, which is important for near-surface applications in an urban environment. 
The numerical and real-world examples demonstrated that MAPS can be used for accurate and fast imaging of high-frequency surface-wave energy, and some examples also show that imaging of a quality similar to that achieved with active sources can be generated from only a few minutes of noise. Using cultural noise in towns, MAPS can image the S-wave velocity structure from the ground surface to hundreds of meters depth.
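
    Claerbout's cross-correlation idea can be sketched in a few lines: correlating long noise records at two receivers turns one receiver into a virtual source, and the lag of the correlation peak approximates the inter-receiver traveltime. This is an illustrative 1-D sketch, not the MAPS processing code:

```python
import numpy as np

def virtual_source_lag(rec_a, rec_b, dt):
    """Cross-correlate two noise records of equal length; the lag of the
    peak approximates the traveltime from receiver A to receiver B."""
    xc = np.correlate(rec_b, rec_a, mode="full")
    lags = np.arange(-len(rec_a) + 1, len(rec_a)) * dt
    return lags[np.argmax(np.abs(xc))]
```

    In practice the records are hours of traffic noise, and the correlations over many receiver pairs are assembled into a virtual shot gather for dispersion analysis.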

  4. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

    Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces and waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave-field coherence on the length scales of passive-source seismology, e.g., for the imaging of transition-zone discontinuities or the core-mantle boundary using reflected precursors. Results, however, are often degraded by sparse station coverage and by the interference of faint back-scattered phases with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience of controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave-front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Because stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. 
We propose to construct exploration-type trace gathers, systematically investigate the potential to improve the quality and regularity of realistic synthetic earthquake data, and present attempts at separating transmitted and back-scattered wave fields for the improved imaging of Earth's large-scale discontinuities.
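
    The slope-extracting summation operators can be reduced, in their simplest form, to a slant stack: shift each trace by a trial slope times its offset and pick the slope that maximizes the stack power. This is an illustrative reduction of the adapted beam-forming described above, not the authors' implementation:

```python
import numpy as np

def slant_stack_slope(traces, offsets, dt, slopes):
    """Estimate the local slope (ray parameter) of a wavefront by slant
    stacking: time-shift each trace by p*offset and return the p that
    maximizes the stacked energy."""
    best_p, best_power = slopes[0], -np.inf
    for p in slopes:
        shifts = np.round(p * offsets / dt).astype(int)
        stack = np.zeros(len(traces[0]))
        for tr, s in zip(traces, shifts):
            stack += np.roll(tr, -s)       # align the trial moveout
        power = np.sum(stack ** 2)
        if power > best_power:
            best_p, best_power = p, power
    return best_p
```

    Because stacking is a directional filter, repeating this with the best-fitting wavefront subtracted separates interfering coherent arrivals.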

  5. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open source image-processing software (the autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with a macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
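
    The principle behind the autocorrelation algorithm (Rubin, 2004) can be sketched in 1-D: coarser grains keep neighboring pixels correlated over longer lags, so the decay of the autocorrelation encodes grain size. This is an illustrative toy version, not the published Octave code:

```python
import numpy as np

def autocorr(profile, lag):
    """Normalized autocorrelation of an intensity profile at a given lag."""
    x = profile - profile.mean()
    return float(np.sum(x[:-lag] * x[lag:]) / np.sum(x * x))

def synth_grains(n_grains, grain_px, rng):
    """Toy 1-D 'sediment image': one random gray level held across each grain."""
    return np.repeat(rng.random(n_grains), grain_px)
```

    The published method calibrates a curve of autocorrelation versus lag against sieved samples to convert the decay rate into a grain-size estimate.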

  6. Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, P.; Zbijewski, W.; Gang, G. J.

    2011-10-15

    Purpose: This work applies a cascaded systems model of cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of the detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in a multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with a strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ≈ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ≈65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end (90 kVp) maximizing NEQ. The analysis quantified fairly intuitive results; e.g., ≈0.1-0.2 mm pixel size (and a sharp reconstruction filter) is optimal for high-frequency tasks (bone detail), compared to ≈0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. 
This result suggests a specific protocol of 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks, along with 2 x 2 binned reconstruction with a smooth filter for low-frequency tasks. The analysis guided the selection of the specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with the evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model of 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
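
    For a simple model observer, the task-based detectability index has the general form d'^2 = integral of NEQ(f) * |W_task(f)|^2 df, so detectability grows with the overlap between the system's NEQ and the task's frequency content. A minimal 1-D sketch (illustrative, not the authors' cascaded-systems code):

```python
import numpy as np

def detectability(freq, neq, task):
    """Simple-observer detectability index:
    d'^2 = integral of NEQ(f) * |W_task(f)|^2 df (1-D, uniform grid)."""
    df = freq[1] - freq[0]
    return np.sqrt(np.sum(neq * np.abs(task) ** 2) * df)
```

    Evaluating d' across candidate magnifications, pixel sizes, and filters, each of which reshapes NEQ(f), is what drives the multivariate optimization described above.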

  7. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis of textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments covering eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
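
    The correlation-based selection of autoencoder weight vectors can be sketched as a greedy filter that drops near-duplicate hidden units. The threshold and the greedy order are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def select_features(weights, max_corr=0.95):
    """Greedily keep weight vectors (rows) whose absolute correlation with
    every already-kept vector stays below max_corr; return kept indices."""
    kept = []
    for i, w in enumerate(weights):
        if all(abs(np.corrcoef(w, weights[k])[0, 1]) < max_corr for k in kept):
            kept.append(i)
    return kept
```

    Dropping redundant hidden units roughly halves the convolutional feature-extraction cost, which matches the paper's reported ~50% saving.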

  8. Photoacoustic image reconstruction: a quantitative analysis

    NASA Astrophysics Data System (ADS)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches in that it provides spatially resolved information about optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques. We focused on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows a good point source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance. It allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also perform several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
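Of the four reconstruction methods compared, delay-and-sum is the simplest to illustrate: each pixel accumulates the sensor samples at the acoustic time of flight from that pixel to each sensor. A minimal sketch with a hypothetical geometry and sampling rate (not the authors' experimental setup):

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c, fs):
    """Naive DnS: signals is (n_sensors, n_samples), sensor_pos is
    (n_sensors, 2), grid is (n_pixels, 2), c the speed of sound in m/s,
    fs the sampling rate in samples/s."""
    n_samples = signals.shape[1]
    image = np.zeros(len(grid))
    for p, pt in enumerate(grid):
        for s, pos in enumerate(sensor_pos):
            # sample index corresponding to the pixel-to-sensor flight time
            idx = int(round(np.linalg.norm(pt - pos) / c * fs))
            if idx < n_samples:
                image[p] += signals[s, idx]
    return image

# Four sensors on a 1 cm circle around a point absorber at the origin.
c, fs = 1500.0, 1.0e7
sensors = np.array([[0.01, 0.0], [0.0, 0.01], [-0.01, 0.0], [0.0, -0.01]])
signals = np.zeros((4, 128))
signals[:, int(round(0.01 / c * fs))] = 1.0  # delta at the flight time
grid = np.array([[0.0, 0.0], [0.002, 0.0]])
image = delay_and_sum(signals, sensors, grid, c, fs)
```

The delayed signals add coherently only at the true source position, which is also why superimposing many such spherical back-projections produces the blurring artifacts the abstract mentions.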

  9. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet the basic requirements such as free availability, stand-alone operation, presence of a graphical user interface, ease of installation and advanced features beyond simple image display. Each selected program's capabilities for data import, data export, metadata handling, 2D viewing, 3D viewing, platform support and usability were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  10. Development of an imaging system for single droplet characterization using a droplet generator.

    PubMed

    Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D

    2012-01-01

    The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and the recent improvements in digital image acquisition and processing have increased the interest in using high speed imaging techniques in pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments have shown the necessity of having a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high speed camera with a 6 µs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a larger field of view of 7.5 cm x 9.5 cm, in combination with a halogen spotlight with a diffuser and the high speed camera, can be used.

  11. Boiler Tube Corrosion Characterization with a Scanning Thermal Line

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Jacobstein, Ronald; Reilly, Thomas

    2001-01-01

    Wall thinning due to corrosion in utility boiler water wall tubing is a significant operational concern for boiler operators. Historically, conventional ultrasonics has been used for inspection of these tubes. Unfortunately, ultrasonic inspection is very labour-intensive and slow. Therefore, thickness measurements are typically taken over a relatively small percentage of the total boiler wall, and statistical analysis is used to determine the overall condition of the boiler tubing. Other inspection techniques, such as electromagnetic acoustic transducers (EMAT), have recently been evaluated; however, they provide only a qualitative evaluation, identifying areas or spots where corrosion has significantly reduced the wall thickness. NASA Langley Research Center, in cooperation with ThermTech Services, has developed a thermal NDE technique designed to quantitatively measure the wall thickness and thus determine the amount of material thinning present in steel boiler tubing. The technique involves the movement of a thermal line source across the outer surface of the tubing followed by an infrared imager at a fixed distance behind the line source. Quantitative images of the material loss due to corrosion are reconstructed from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to reconstruct images of flaws. The application of the thermal line source coupled with the analysis technique represents a significant improvement in the inspection speed and accuracy for large structures such as boiler water walls. A theoretical basis for the technique will be presented to establish its quantitative nature. Further, a dynamic calibration system will be presented that allows the extraction of thickness information from the temperature data.
Additionally, the results of the application of this technology to actual water wall tubing samples and in-situ inspections will be presented.

  12. The Mapping X-Ray Fluorescence Spectrometer (MAPX)

    NASA Technical Reports Server (NTRS)

    Blake, David; Sarrazin, Philippe; Bristow, Thomas; Downs, Robert; Gailhanou, Marc; Marchis, Franck; Ming, Douglas; Morris, Richard; Sole, Vincente Armando; Thompson, Kathleen

    2016-01-01

    MapX will provide elemental imaging at ≤100 micron spatial resolution over 2.5 x 2.5 centimeter areas, yielding elemental chemistry at or below the scale length where many relict physical, chemical, and biological features can be imaged and interpreted in ancient rocks. MapX is a full-frame spectroscopic imager positioned on soil or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample surface with X-rays or alpha-particles / gamma rays, resulting in sample X-ray fluorescence (XRF). Fluoresced X-rays pass through an X-ray lens (X-ray µ-Pore Optic, "MPO") that projects a spatially resolved image of the X-rays onto a CCD. The CCD is operated in single photon counting mode so that the positions and energies of individual photons are retained. In a single analysis, several thousand frames are stored and processed. A MapX experiment provides elemental maps having a spatial resolution of ≤100 micron and quantitative XRF spectra from Regions of Interest (ROI) ranging in size from 2 centimeters down to 100 micron. ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. The MapX geometry is being refined with ray-tracing simulations and with synchrotron experiments at SLAC. Source requirements are being determined through Monte Carlo modeling and experiment using XMIMSIM [1], GEANT4 [2] and PyMca [3] and a dedicated XRF test fixture. A flow-down of requirements for both tube and radioisotope sources is being developed from these experiments. In addition to Mars lander and rover missions, MapX could be used for landed science on other airless bodies (Phobos/Deimos, comet nuclei, asteroids, the Earth's moon, and the icy satellites of the outer planets, including Europa).

  13. Regional aeolian dynamics and sand mixing in the Gran Desierto - Evidence from Landsat Thematic Mapper images

    NASA Technical Reports Server (NTRS)

    Blount, Grady; Greeley, Ronald; Christensen, Phillip R.; Smith, Milton O.; Adams, John B.

    1990-01-01

    Mesoscale mapping of spatial variations in sand composition of the Gran Desierto (Sonora, Mexico) was carried out on multispectral Landsat TM images of this region, making it possible to examine the dynamic development of sand sheets and dunes. Compositions determined from remote imagery were found to agree well with samples from selected areas. The sand populations delineated were used to describe the sediment source areas, transport paths, and deposition sites. The image analysis revealed important compositional variations over large areas that were not readily apparent in the field data.

  14. Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-02-01

    I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.

  15. Non-destructive terahertz imaging of illicit drugs using spectral fingerprints

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuuki; Inoue, Hiroyuki

    2003-10-01

    The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has allowed such drugs to be smuggled across international borders freely. We have developed a novel basic technology for terahertz imaging, which allows detection and identification of drugs concealed in envelopes, by introducing component spatial pattern analysis. The spatial distributions of the targets are obtained from terahertz multispectral transillumination images, using absorption spectra measured with a tunable terahertz-wave source. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
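Component spatial pattern analysis amounts, per pixel, to unmixing the measured multispectral absorbance into contributions from known component spectra. A least-squares sketch with hypothetical spectra (illustrative numbers, not the measured terahertz fingerprints):

```python
import numpy as np

# Hypothetical reference absorption spectra at three THz frequencies
# (columns: component 1, component 2, aspirin reference; rows: frequencies).
A = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.4],
              [0.3, 0.3, 0.9]])

# Measured absorbance at one pixel: a 70/30 mix of component 1 and aspirin.
y = A @ np.array([0.7, 0.0, 0.3])

# Least-squares unmixing recovers the per-component abundances,
# which mapped over all pixels gives the component spatial patterns.
abundances, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Repeating this solve for every pixel of the transillumination image yields one abundance map per substance, which is how a concealed drug can be both located and identified.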

  16. Background derivation and image flattening: getimages

    NASA Astrophysics Data System (ADS)

    Men'shchikov, A.

    2017-11-01

    Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width X_λ. The latter is the single free parameter of getimages and can be evaluated manually from the observed image I_λ. The median filtering algorithm provides a background image B_λ for structures of all widths below X_λ. The same median filtering procedure applied to an image of standard deviations D_λ, derived from the background-subtracted image S_λ, results in a flattening image F_λ. Finally, a flattened detection image I_λD = S_λ / F_λ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
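The core background-derivation step can be sketched as a sliding-window median filter followed by subtraction. The window size and toy image below are illustrative assumptions, not getimages defaults:

```python
import numpy as np

def median_background(image, size):
    """Naive sliding-window median filter: the median within each
    window estimates the local background, since a compact source
    occupies only a minority of the window's pixels."""
    pad = size // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Flat background of 10 with one bright compact "source" at the centre.
img = np.full((9, 9), 10.0)
img[4, 4] = 100.0
background = median_background(img, size=3)
residual = img - background  # background-subtracted detection image
```

The median ignores the bright pixel, so the residual isolates the source on a zero background; getimages iterates this idea over a range of window sizes and additionally divides by a flattening image to equalize the noise.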

  17. InSAR Surface Deformation and Source Modelling at Semisopochnoi Island During the 2014 and 2015 Seismic Swarms with Constraints from Geochemical and Seismic Analysis

    NASA Astrophysics Data System (ADS)

    DeGrandpre, K.; Pesicek, J. D.; Lu, Z.

    2017-12-01

    During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi Island in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study employs interferometric synthetic aperture radar (InSAR) techniques using TerraSAR-X images, in conjunction with more accurate relocation of the recorded seismic events through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The InSAR images exhibit surprising coherence and an island-wide spatial distribution of inflation that is then used in Mogi, Okada, spheroid, and ellipsoid source models in order to define the three-dimensional location and volume change required for a source at the volcano to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as a tighter clustering of events for both swarms that clearly outlines a linear seismic void within the larger group of shallow (<10 km) seismicity. The source models are fit to this void, and pressure estimates from geochemical analysis are used to verify the storage depth of magmas at Semisopochnoi. Comparisons of calculated source cavity, magma injection, and surface deformation volumes are made in order to assess the consistency of the various modelling estimates. Incorporating geochemical and seismic data to constrain surface deformation source inversions provides an interdisciplinary approach that can be used to make more accurate interpretations of dynamic observations.
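Among the source models listed, the Mogi point source has a closed-form surface displacement, which makes the deformation-to-volume-change inversion tractable. A sketch of the standard vertical-displacement formula with hypothetical depth and volume change (not values inferred for Semisopochnoi):

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source:
    u_z = (1 - nu) * dV * d / (pi * (r^2 + d^2)^(3/2)),
    with r the horizontal distance, d the source depth, dV the
    volume change, and nu Poisson's ratio."""
    return (1.0 - nu) * dV * depth / (math.pi * (r**2 + depth**2) ** 1.5)

# Peak uplift directly above a source 4 km deep with 1e6 m^3 volume change.
peak = mogi_uz(0.0, 4000.0, 1.0e6)
```

Fitting this radially symmetric pattern (or its Okada/spheroid/ellipsoid generalizations) to the InSAR-derived deformation field yields the source location and volume change that the study compares against the seismic void and geochemical depth estimates.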

  18. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the very surface through epidermis and dermis to subcutaneous tissue. Conventional optical microscopy has the highest imaging efficiency, flexibility in onsite applications and lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters to form indirect images. During the through-skin light modulation and image retrieving process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring the wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.

  19. Determination of renewable energy yield from mixed waste material from the use of novel image analysis methods.

    PubMed

    Wagland, S T; Dudley, R; Naftaly, M; Longhurst, P J

    2013-11-01

    Two novel techniques are presented in this study which together aim to provide a system able to determine the renewable energy potential of mixed waste materials. An image analysis tool was applied to two waste samples prepared using known quantities of source-segregated recyclable materials. The technique was used to determine the composition of the wastes, where through the use of waste component properties the biogenic content of the samples was calculated. The percentage renewable energy determined by image analysis for each sample was accurate to within 5% of the actual values calculated. Microwave-based multiple-point imaging (AutoHarvest) was used to demonstrate the ability of such a technique to determine the moisture content of mixed samples. This proof-of-concept experiment was shown to produce moisture measurements accurate to within 10%. Overall, the image analysis tool was able to determine the renewable energy potential of the mixed samples, and the AutoHarvest should enable the net calorific value calculations through the provision of moisture content measurements. The proposed system is suitable for combustion facilities, and enables the operator to understand the renewable energy potential of the waste prior to combustion. Copyright © 2013 Elsevier Ltd. All rights reserved.
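The biogenic-content calculation from an image-derived composition reduces to a property-weighted sum over components. A sketch with hypothetical component fractions and properties (illustrative values, not data from the paper):

```python
# Hypothetical waste composition from image analysis:
# mass fraction, biogenic share of the component, net calorific value (MJ/kg).
components = {
    "paper":   {"fraction": 0.40, "biogenic": 1.0, "ncv": 14.0},
    "plastic": {"fraction": 0.30, "biogenic": 0.0, "ncv": 32.0},
    "wood":    {"fraction": 0.30, "biogenic": 1.0, "ncv": 16.0},
}

# Total energy content per kg of waste, and the biogenic (renewable) part.
total_energy = sum(c["fraction"] * c["ncv"] for c in components.values())
renewable_energy = sum(c["fraction"] * c["ncv"] * c["biogenic"]
                       for c in components.values())
renewable_share = renewable_energy / total_energy
```

Moisture measurements from the AutoHarvest step would then adjust the net calorific values before this weighting, since water content lowers the usable energy per kilogram.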

  20. Integrated image data and medical record management for rare disease registries. A general framework and its instantiation to the German Calciphylaxis Registry.

    PubMed

    Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula

    2014-12-01

    Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool that is already successfully operating at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing is supported by Apache Commons FileUpload-Library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated to the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist improved data collection and analysis of rare diseases in near future.
