Sample records for complex image analysis

  1. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute the complexity of an image based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low complexity, medium complexity and high complexity. Image features are then extracted, and finally a function relating the complexity value to the color characteristic model is established. The experimental results show that this kind of evaluation method can objectively reconstruct the complexity of an image from its image features, and that the complexity values obtained by the method are in good agreement with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.
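
    A minimal sketch of how such a colour-information complexity score could be computed. The exact colour features and the mapping to the three subjective levels are not given above, so hue entropy and a standard colourfulness measure are used here purely as illustrative stand-ins, and the file name is hypothetical:

    ```python
    # Illustrative stand-ins for the paper's colour features: hue entropy plus a
    # Hasler-Suesstrunk style colourfulness score; not the authors' actual model.
    import numpy as np
    from skimage import io, color

    def color_complexity(path, bins=64):
        rgb = io.imread(path)[..., :3] / 255.0          # hypothetical 8-bit RGB image
        hsv = color.rgb2hsv(rgb)
        hist, _ = np.histogram(hsv[..., 0], bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        hue_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        rg = rgb[..., 0] - rgb[..., 1]                  # opponent colour channels
        yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
        colourfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
        return hue_entropy, colourfulness

    # The two features could then be mapped onto low/medium/high bands with
    # thresholds fitted to subjective ratings.
    print(color_complexity("scene.png"))
    ```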

  2. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. The characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method was recently developed for image feature extraction that analyzes the characteristic curves of objects (simple curves) using a chain-code search of the object. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics with at most four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.

  3. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts of TIFF format images were compared with the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
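
    A rough sketch of the comparison described above, with a simple threshold-and-label count standing in for the validated computer-assisted system used in the study (the quality settings and file name are assumptions):

    ```python
    # Compare nucleus counts on the original TIFF and on JPEG re-compressed copies.
    # The Otsu threshold-and-label counter is only a stand-in for the study's system.
    import io as iolib
    import numpy as np
    from PIL import Image
    from skimage import io, color, filters, measure

    def count_dark_nuclei(rgb):
        gray = color.rgb2gray(rgb)
        mask = gray < filters.threshold_otsu(gray)       # stained nuclei are darker
        return int(measure.label(mask).max())

    def recompressed(rgb, quality):
        buf = iolib.BytesIO()
        Image.fromarray(rgb).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return np.array(Image.open(buf))

    tiff = io.imread("field.tif")                        # hypothetical input field
    reference = count_dark_nuclei(tiff)
    for q in (95, 50, 10):                               # stand-ins for 3x, 23x, 46x compression
        print(f"quality={q}: count difference {count_dark_nuclei(recompressed(tiff, q)) - reference}")
    ```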

  4. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the Public Domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from x-ray tomography 3D images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed the user to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  5. Contextual analysis of immunological response through whole-organ fluorescent imaging.

    PubMed

    Woodruff, Matthew C; Herndon, Caroline N; Heesters, B A; Carroll, Michael C

    2013-09-01

    As fluorescent microscopy has developed, significant insights have been gained into the establishment of immune response within secondary lymphoid organs, particularly in draining lymph nodes. While established techniques such as confocal imaging and intravital multi-photon microscopy have proven invaluable, they provide limited insight into the architectural and structural context in which these responses occur. To interrogate the role of the lymph node environment in immune response effectively, a new set of imaging tools taking into account broader architectural context must be implemented into emerging immunological questions. Using two different methods of whole-organ imaging, optical clearing and three-dimensional reconstruction of serially sectioned lymph nodes, fluorescent representations of whole lymph nodes can be acquired at cellular resolution. Using freely available post-processing tools, images of unlimited size and depth can be assembled into cohesive, contextual snapshots of immunological response. Through the implementation of robust iterative analysis techniques, these highly complex three-dimensional images can be objectified into sortable object data sets. These data can then be used to interrogate complex questions at the cellular level within the broader context of lymph node biology. By combining existing imaging technology with complex methods of sample preparation and capture, we have developed efficient systems for contextualizing immunological phenomena within lymphatic architecture. In combination with robust approaches to image analysis, these advances provide a path to integrating scientific understanding of basic lymphatic biology into the complex nature of immunological response.

  6. The performance of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury: a meta-analysis.

    PubMed

    Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J

    2015-06-01

    The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield a pooled sensitivity and specificity of 0.83 and 0.82, respectively. In detection of central and peripheral tears, magnetic resonance imaging had pooled sensitivities of 0.90 and 0.88 and pooled specificities of 0.97 and 0.97, respectively. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.

  7. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    PubMed

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n  =  2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results that the quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
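
    A minimal sketch of the two measures and the clustering step; the paper's exact binarisation and Kolmogorov-complexity estimator are not given above, so the compressed size of the binary image is used as a common KC proxy, and the input files are hypothetical pre-processed binary NFC images:

    ```python
    # Box-counting fractal dimension plus a compression-based Kolmogorov proxy,
    # followed by 2-cluster k-means on (FD, KC); all inputs are boolean arrays.
    import zlib
    import numpy as np
    from sklearn.cluster import KMeans

    def box_counting_fd(binary):
        sizes, counts = [2, 4, 8, 16, 32, 64], []
        for s in sizes:
            h, w = binary.shape
            blocks = binary[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s).any(axis=(1, 3))
            counts.append(max(int(blocks.sum()), 1))
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    def kolmogorov_proxy(binary):
        return len(zlib.compress(np.packbits(binary)))   # smaller = more regular

    # Hypothetical pre-processed, binarised NFC images (11 CM + 11 NCM)
    binary_images = [np.load(f"nfc_{i:02d}_binary.npy") for i in range(22)]
    features = np.array([[box_counting_fd(b), kolmogorov_proxy(b)] for b in binary_images])
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)  # CM vs NCM candidates
    ```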

  8. Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.

    PubMed

    Ristic, Dejan; Sanchez, Humberto; Wyman, Claire

    2011-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  9. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.

  10. Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.

    PubMed

    Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire

    2018-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  11. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies.

  12. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
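
    A small sketch of the analysis pipeline described above, assuming the pooled pairwise complexity judgements have already been arranged into a symmetric dissimilarity matrix (the CSV file is hypothetical):

    ```python
    # Non-metric MDS embedding of pooled dissimilarity judgements into 2-D, plus a
    # 1-D solution used as a complexity ordering of the 21 scenes.
    import numpy as np
    from sklearn.manifold import MDS

    # dissimilarity[i, j]: pooled judged complexity difference between scenes i and j
    dissimilarity = np.loadtxt("pooled_dissimilarity.csv", delimiter=",")

    xy = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
    axis = MDS(n_components=1, metric=False, dissimilarity="precomputed",
               random_state=0).fit_transform(dissimilarity)

    ordering = np.argsort(axis[:, 0])   # one-dimensional complexity ordering of the scenes
    # 'ordering' can then be correlated with computable metrics such as polygon
    # count or JPEG-compressed file size.
    ```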

  13. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  14. Automated image-based phenotypic analysis in zebrafish embryos

    PubMed Central

    Vogt, Andreas; Cholewinski, Andrzej; Shen, Xiaoqiang; Nelson, Scott; Lazo, John S.; Tsang, Michael; Hukriede, Neil A.

    2009-01-01

    Presently, the zebrafish is the only vertebrate model compatible with contemporary paradigms of drug discovery. Zebrafish embryos are amenable to the automation necessary for high-throughput chemical screens, and optical transparency makes them potentially suited for image-based screening. However, the lack of tools for automated analysis of complex images presents an obstacle to utilizing the zebrafish as a high-throughput screening model. We have developed an automated system for imaging and analyzing zebrafish embryos in multi-well plates regardless of embryo orientation and without user intervention. Images of fluorescent embryos were acquired on a high-content reader and analyzed using an artificial intelligence-based image analysis method termed Cognition Network Technology (CNT). CNT reliably detected transgenic fluorescent embryos (Tg(fli1:EGFP)y1) arrayed in 96-well plates and quantified intersegmental blood vessel development in embryos treated with small molecule inhibitors of angiogenesis. The results demonstrate that it is feasible to adapt image-based high-content screening methodology to measure complex whole-organism phenotypes. PMID:19235725

  15. Topological image texture analysis for quality assessment

    NASA Astrophysics Data System (ADS)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy and helps detect image tampering for forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the Local Binary Pattern (LBP) as a texture feature descriptor. For any image we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g. the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images that are affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that, for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
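
    A compact sketch of the ingredients named above, using scipy/skimage stand-ins: a uniform LBP map, a mask for one selected uniform bin, and the number of connected components of that mask (the 0-th Betti number, i.e. the simplest persistent homology invariant mentioned). The face image and the chosen bin are hypothetical:

    ```python
    # Uniform LBP followed by a connected-component count on one selected LBP bin.
    import numpy as np
    from scipy import ndimage
    from skimage import io, color, feature

    gray = color.rgb2gray(io.imread("face.png"))          # hypothetical face image
    lbp = feature.local_binary_pattern(gray, P=8, R=1, method="uniform")

    selected_bin = 4                                       # one uniform LBP bin of interest
    mask = lbp == selected_bin
    _, n_components = ndimage.label(mask)
    print(f"bin {selected_bin}: {n_components} connected components")
    ```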

  16. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
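
    A rough sketch of the resolution experiment; the authors' exact fractal estimator is not given here, so a crude variogram-style estimate (D = 3 - H, with H from a log-log slope of mean absolute differences) stands in, and the NDVI file is hypothetical:

    ```python
    # Re-estimate a surface fractal dimension after block-averaging the NDVI image
    # to coarser pixel sizes, tracing the dimension-vs-resolution relation.
    import numpy as np
    from skimage import io
    from skimage.measure import block_reduce

    def surface_fd(ndvi):
        lags = np.array([1, 2, 4, 8])
        v = [np.mean(np.abs(ndvi[:, lag:] - ndvi[:, :-lag])) for lag in lags]
        hurst, _ = np.polyfit(np.log(lags), np.log(v), 1)  # crude Hurst exponent
        return 3.0 - hurst

    ndvi = io.imread("ndvi_10m.tif").astype(float)         # hypothetical 10 m NDVI image
    for factor in (1, 2, 4, 8):                            # 10 m, 20 m, 40 m, 80 m pixels
        coarse = block_reduce(ndvi, block_size=(factor, factor), func=np.mean)
        print(f"{10 * factor} m pixels: D = {surface_fd(coarse):.3f}")
    ```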

  17. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  18. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
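
    A short sketch of the segmentation-free idea: compute histogram-of-oriented-gradients (HOG) descriptors directly on the raw micrograph and compare those fingerprints across samples, never producing a precipitate/matrix segmentation. The parameter choices and file name are assumptions:

    ```python
    # HOG feature vector computed straight from the grey-level micrograph.
    import numpy as np
    from skimage import io, color
    from skimage.feature import hog

    micrograph = color.rgb2gray(io.imread("superalloy.png"))   # hypothetical micrograph

    features = hog(micrograph, orientations=9, pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2), feature_vector=True)

    # Descriptors from different micrographs can be compared directly, e.g. by
    # cosine distance, to quantify differences in precipitate shape populations.
    print(features.shape)
    ```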

  19. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  20. Real-time MRI guidance of cardiac interventions.

    PubMed

    Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A

    2017-10-01

    Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. 1 Technical Efficacy: Stage 5 J. Magn. Reson. Imaging 2017;46:935-950. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    PubMed

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  2. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  3. New methods for image collection and analysis in scanning Auger microscopy

    NASA Technical Reports Server (NTRS)

    Browning, R.

    1985-01-01

    While scanning Auger micrographs are used extensively for illustrating the stoichiometry of complex surfaces and for indicating areas of interest for fine point Auger spectroscopy, there are many problems in the quantification and analysis of Auger images. These problems include multiple contrast mechanisms and the lack of meaningful relationships with other Auger data. Collection of multielemental Auger images allows some new approaches to image analysis and presentation. Information about the distribution and quantity of elemental combinations at a surface are retrievable, and particular combinations of elements can be imaged, such as alloy phases. Results from the precipitate hardened alloy Al-2124 illustrate multispectral Auger imaging.

  4. Analysis and improvement of the quantum image matching

    NASA Astrophysics Data System (ADS)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

    We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area played a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.
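
    A purely classical illustration of the point being made (not the quantum algorithm itself): matching on the whole candidate area uniquely locates the small image, whereas matching only the upper-left pixel is ambiguous whenever that pixel value recurs elsewhere in the big image:

    ```python
    # Exhaustive whole-area matching: minimise the sum of absolute differences over
    # every placement of the small image inside the big image.
    import numpy as np

    def locate(big, small):
        h, w = small.shape
        best_score, best_pos = None, None
        for i in range(big.shape[0] - h + 1):
            for j in range(big.shape[1] - w + 1):
                score = np.abs(big[i:i + h, j:j + w] - small).sum()
                if best_score is None or score < best_score:
                    best_score, best_pos = score, (i, j)
        return best_pos

    rng = np.random.default_rng(0)
    big = rng.integers(0, 4, (16, 16))
    small = big[5:9, 7:11].copy()
    print(locate(big, small))        # expected: (5, 7)
    ```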

  5. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
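
    Slide Set itself is an ImageJ plugin; as a rough Python analogue of the same batch idea (a table associates each image with a region of interest, one measurement is repeated over every row, and the parameters travel with the results), one might write something like the following, where the CSV columns are assumptions:

    ```python
    # Batch measurement driven by a data table: columns file,x,y,w,h (assumed).
    import csv
    from skimage import io

    def mean_intensity(image, x, y, w, h):
        return float(image[y:y + h, x:x + w].mean())

    with open("slides.csv") as table, open("results.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["file", "x", "y", "w", "h", "mean_intensity"])
        for row in csv.DictReader(table):
            img = io.imread(row["file"], as_gray=True)
            x, y, w, h = (int(row[k]) for k in ("x", "y", "w", "h"))
            writer.writerow([row["file"], x, y, w, h, mean_intensity(img, x, y, w, h)])
    ```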

  6. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  7. Pathophysiology of Degenerative Mitral Regurgitation: New 3-Dimensional Imaging Insights.

    PubMed

    Antoine, Clemence; Mantovani, Francesca; Benfari, Giovanni; Mankad, Sunil V; Maalouf, Joseph F; Michelena, Hector I; Enriquez-Sarano, Maurice

    2018-01-01

    Despite its high prevalence, little is known about mechanisms of mitral regurgitation in degenerative mitral valve disease apart from the leaflet prolapse itself. Mitral valve is a complex structure, including mitral annulus, mitral leaflets, papillary muscles, chords, and left ventricular walls. All these structures are involved in physiological and pathological functioning of this valvuloventricular complex but up to now were difficult to analyze because of inherent limitations of 2-dimensional imaging. The advent of 3-dimensional echocardiography, computed tomography, and cardiac magnetic resonance imaging overcoming these limitations provides new insights into mechanistic analysis of degenerative mitral regurgitation. This review will detail the contribution of quantitative and qualitative dynamic analysis of mitral annulus and mitral leaflets by new imaging methods in the understanding of degenerative mitral regurgitation pathophysiology. © 2018 American Heart Association, Inc.

  8. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  9. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  10. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by some image sequences from solar observations.

  11. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets.

    PubMed

    Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M

    2015-10-01

    Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.

  12. Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications

    NASA Astrophysics Data System (ADS)

    Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.

    2002-08-01

    We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.

  13. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    A segmentation algorithm is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the objects involved, we propose a projection-based layout segmentation algorithm. The algorithm first partitions the text image into several columns; then, by scanning and projecting each column, the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection itself, avoids the effect of arc-shaped image distortion on page segmentation, and can accurately segment text images with complex layouts.
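
    A small sketch of the projection idea described above (Otsu binarisation and the minimum-gap width are assumptions, and the page image is hypothetical):

    ```python
    # Vertical projection splits the page into columns at white gaps; a horizontal
    # projection then splits each column into text blocks.
    import numpy as np
    from skimage import io, filters

    def split_on_gaps(profile, min_gap=10):
        ink = profile > 0
        edges = np.flatnonzero(np.diff(ink.astype(int)))
        bounds = [0] + (edges + 1).tolist() + [len(profile)]
        runs = [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if ink[a]]
        merged = []                                   # join runs separated by gaps < min_gap
        for a, b in runs:
            if merged and a - merged[-1][1] < min_gap:
                merged[-1] = (merged[-1][0], b)
            else:
                merged.append((a, b))
        return merged

    page = io.imread("page.png", as_gray=True)        # hypothetical document image
    binary = page < filters.threshold_otsu(page)      # text pixels -> True

    for x0, x1 in split_on_gaps(binary.sum(axis=0)):  # columns from the vertical projection
        blocks = split_on_gaps(binary[:, x0:x1].sum(axis=1))
        print(f"column [{x0}:{x1}] -> {len(blocks)} text blocks")
    ```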

  14. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS

    2010-04-06

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  15. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C.G.; Siddons, D.P.; Kirkham, R.

    2010-05-25

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  16. Genetic programming approach to evaluate complexity of texture images

    NASA Astrophysics Data System (ADS)

    Ciocca, Gianluigi; Corchs, Silvia; Gasparini, Francesca

    2016-11-01

    We adopt genetic programming (GP) to define a measure that can predict complexity perception of texture images. We perform psychophysical experiments on three different datasets to collect data on the perceived complexity. The subjective data are used for training, validation, and test of the proposed measure. These data are also used to evaluate several possible candidate measures of texture complexity related to both low level and high level image features. We select four of them (namely roughness, number of regions, chroma variance, and memorability) to be combined in a GP framework. This approach allows a nonlinear combination of the measures and could give hints on how the related image features interact in complexity perception. The proposed complexity measure M exhibits Pearson correlation coefficients of 0.890 on the training set, 0.728 on the validation set, and 0.724 on the test set. M outperforms each of all the single measures considered. From the statistical analysis of different GP candidate solutions, we found that the roughness measure evaluated on the gray level image is the most dominant one, followed by the memorability, the number of regions, and finally the chroma variance.
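
    A sketch of three of the candidate measures named above and their correlation with subjective scores; the exact definitions are not given here, so mean Sobel gradient magnitude stands in for roughness, a Felzenszwalb over-segmentation count for the number of regions, and Lab chroma variance for chroma variance, while memorability (which needs a learned model) is omitted. File names and ratings are hypothetical:

    ```python
    # Candidate texture-complexity measures and their Pearson correlation with
    # subjective complexity ratings (stand-in definitions, hypothetical data).
    import numpy as np
    from scipy.stats import pearsonr
    from skimage import io, color, filters, segmentation

    def texture_measures(path):
        rgb = io.imread(path)[..., :3]
        gray = color.rgb2gray(rgb)
        roughness = filters.sobel(gray).mean()
        n_regions = len(np.unique(segmentation.felzenszwalb(rgb, scale=100)))
        lab = color.rgb2lab(rgb)
        chroma_var = np.hypot(lab[..., 1], lab[..., 2]).var()
        return roughness, n_regions, chroma_var

    paths = ["t1.png", "t2.png", "t3.png", "t4.png"]        # hypothetical textures
    subjective = [0.2, 0.5, 0.7, 0.9]                       # hypothetical ratings
    measures = np.array([texture_measures(p) for p in paths])
    for name, col in zip(("roughness", "n_regions", "chroma_var"), measures.T):
        r, _ = pearsonr(col, subjective)
        print(f"{name}: r = {r:.3f}")
    ```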

  17. Automated Selection of Regions of Interest for Intensity-based FRET Analysis of Transferrin Endocytic Trafficking in Normal vs. Cancer Cells

    PubMed Central

    Talati, Ronak; Vanderpoel, Andrew; Eladdadi, Amina; Anderson, Kate; Abe, Ken; Barroso, Margarida

    2013-01-01

    The overexpression of certain membrane-bound receptors is a hallmark of cancer progression and it has been suggested to affect the organization, activation, recycling and down-regulation of receptor-ligand complexes in human cancer cells. Thus, comparing receptor trafficking pathways in normal vs. cancer cells requires the ability to image cells expressing dramatically different receptor expression levels. Here, we have presented a significant technical advance to the analysis and processing of images collected using intensity-based Förster resonance energy transfer (FRET) confocal microscopy. An automated ImageJ macro was developed to select regions of interest (ROIs) based on intensity- and statistics-based thresholds within cellular images with reduced FRET signal. Furthermore, SSMD (strictly standardized mean differences), a statistical signal-to-noise ratio (SNR) evaluation parameter, was used to validate the quality of FRET analysis, in particular of ROI database selection. The ImageJ ROI selection macro, together with SSMD as an evaluation parameter of SNR levels, was used to investigate the endocytic recycling of Tfn-TFR complexes at nanometer range resolution in human normal vs. breast cancer cells expressing significantly different levels of endogenous TFR. Here, the FRET-based assay demonstrates that Tfn-TFR complexes in normal epithelial vs. breast cancer cells show a significantly different E% behavior during their endocytic recycling pathway. Since E% is a relative measure of distance, we propose that these changes in E% levels represent conformational changes in Tfn-TFR complexes during the endocytic pathway. Thus, our results indicate that Tfn-TFR complexes undergo different conformational changes in normal vs. cancer cells, indicating that the organization of Tfn-TFR complexes at the nanometer range is significantly altered during the endocytic recycling pathway in cancer cells. In summary, improvements in the automated selection of FRET ROI datasets allowed us to detect significant changes in E% with potential biological significance in human normal vs. cancer cells. PMID:23994873
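
    The SSMD criterion mentioned above has a standard closed form for two independent groups; a minimal sketch applied to hypothetical E% values:

    ```python
    # Strictly standardized mean difference between two groups of FRET E% values.
    import numpy as np

    def ssmd(group_a, group_b):
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) + b.var(ddof=1))

    e_normal = [12.1, 13.4, 11.8, 12.9]     # hypothetical E% values, normal cells
    e_cancer = [17.5, 18.2, 16.9, 18.8]     # hypothetical E% values, cancer cells
    print(f"SSMD = {ssmd(e_cancer, e_normal):.2f}")
    ```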

  18. Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering

    NASA Astrophysics Data System (ADS)

    Samsonov, Alexei A.; Johnson, Chris R.

    2004-05-01

    MRI data analysis is routinely done on the magnitude part of complex images. While both real and imaginary image channels contain Gaussian noise, magnitude MRI data are characterized by a Rice distribution. However, conventional filtering methods often assume image noise to be zero mean and Gaussian distributed. Estimation of an underlying image using magnitude data therefore produces a biased result. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). The incorporation of the Rice PDF into a noise filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent image phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In our proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data, providing efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin-echo acquisition protocols.
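
    A compact sketch of the coupling idea described above; a basic Perona-Malik style diffusion stands in for the paper's filtering scheme, with the real and imaginary channels diffused separately but sharing a diffusion coefficient computed from the magnitude. The step size and edge parameter kappa are assumptions:

    ```python
    # Coupled nonlinear diffusion of the real and imaginary channels of a complex
    # spin-echo image, with a shared magnitude-derived diffusion coefficient.
    import numpy as np

    def coupled_diffusion(cimg, n_iter=20, kappa=0.05, dt=0.2):
        re, im = cimg.real.copy(), cimg.imag.copy()
        for _ in range(n_iter):
            mag = np.hypot(re, im)
            gy, gx = np.gradient(mag)
            c = 1.0 / (1.0 + (gx**2 + gy**2) / kappa**2)   # shared coefficient from |image|
            for chan in (re, im):
                cy, cx = np.gradient(chan)
                chan += dt * (np.gradient(c * cx, axis=1) + np.gradient(c * cy, axis=0))
        return re + 1j * im

    # magnitude = np.abs(coupled_diffusion(complex_spin_echo_image))   # unbiased denoised magnitude
    ```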

  19. An architecture for a brain-image database

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.

    2000-01-01

    The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.

  20. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    PubMed Central

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
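
    Not part of the review itself, but a minimal sketch of one of the simplest intensity-based approaches it covers: three-class k-means clustering of brain voxel intensities into CSF, grey matter and white matter, assuming a skull-stripped NIfTI volume and the nibabel package:

    ```python
    # Three-class k-means tissue segmentation of a skull-stripped T1 volume.
    import numpy as np
    import nibabel as nib                       # assumption: data supplied as NIfTI
    from sklearn.cluster import KMeans

    volume = nib.load("t1_brain_stripped.nii.gz").get_fdata()   # hypothetical file
    mask = volume > 0                                            # non-brain voxels already removed
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(volume[mask].reshape(-1, 1))

    segmentation = np.zeros(volume.shape, dtype=int)
    segmentation[mask] = labels + 1             # 1, 2, 3 = three intensity-based tissue classes
    ```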

  1. Analysis of live cell images: Methods, tools and opportunities.

    PubMed

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. We also discuss the opportunities offered by recent advances in machine learning, especially deep learning, and computer vision. The review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.
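
    A generic sketch of the segmentation step mentioned above, assuming scikit-image and scipy: Otsu thresholding followed by a distance-transform watershed to split touching cells. The input "frame" and the min_distance value are assumptions, not taken from the review.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(frame):
    # Foreground mask from a global Otsu threshold.
    mask = frame > threshold_otsu(frame)
    # Distance transform peaks become watershed seeds.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)   # labeled cells
```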

  2. Analysis on influence of installation error of off-axis three-mirror optical system on imaging line-of-sight

    NASA Astrophysics Data System (ADS)

    Gao, Lingyu; Li, Xinghua; Guo, Qianrui; Quan, Jing; Hu, Zhengyue; Su, Zhikun; Zhang, Dong; Liu, Peilu; Li, Haopeng

    2018-01-01

    The internal structure of an off-axis three-mirror system is typically complex. Mirror installation errors introduced during assembly affect the imaging line of sight and further degrade image quality. Because of the complexity of the optical path in an off-axis three-mirror system, a straightforward theoretical analysis of the variations in the imaging line of sight is extremely difficult. To simplify the analysis, an equivalent single-mirror system is proposed in this paper; its mathematical model is established and accurate expressions for the imaging coordinates are derived. Using the simulation software ZEMAX, both the off-axis three-mirror model and the single-mirror model are built. By adjusting the mirror position and simulating the line-of-sight rotation of the optical system, the variations of the imaging coordinates are clearly observed. The simulation results show that the sensitivity of the imaging coordinate to line-of-sight rotation is approximately 30 μm/″ in the off-axis three-mirror system and 31.5 μm/″ in the single-mirror system. The 5% relative error of the single-mirror analysis with respect to the off-axis three-mirror simulation satisfies the requirement for an equivalent analysis and verifies its validity. This paper thus presents a new method for analyzing how mirror installation errors in an off-axis three-mirror system influence the imaging line of sight, with the off-axis three-mirror system treated as equivalent to the single-mirror model in the theoretical analysis.

  3. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
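
    A minimal illustration (not SamuROI itself) of the core ROI measurement the abstract describes: given an image series of shape (t, y, x) and a boolean ROI mask, compute the mean fluorescence trace and a dF/F0 trace. Variable names and the baseline window are placeholders.

```python
import numpy as np

def roi_trace(stack, roi_mask, f0_frames=20):
    trace = stack[:, roi_mask].mean(axis=1)     # mean fluorescence per frame
    f0 = trace[:f0_frames].mean()               # baseline from the first frames
    return trace, (trace - f0) / f0             # raw trace and dF/F0
```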

  4. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  5. Quantitative Medical Image Analysis for Clinical Development of Therapeutics

    NASA Astrophysics Data System (ADS)

    Analoui, Mostafa

    There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increases in average life expectancy, as well as quality of life, globally. However, owing to the complexity of addressing many medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validating the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing classes of biomarkers being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewers' bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.

  6. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    PubMed

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image that is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole-brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion method has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces the redundant details, artifacts, and distortions.
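
    A hedged sketch of the spatial-domain PCA step described above: the fusion weights are taken from the dominant eigenvector of the covariance of the two registered, same-size source images. The wavelet stage of the cascade is omitted, and the function name is a placeholder.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    # Stack the two images as two observations of the same pixels.
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    cov = np.cov(data)                            # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()                               # normalized fusion weights
    return w[0] * img_a + w[1] * img_b
```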

  7. Quantitative real-time analysis of collective cancer invasion and dissemination

    NASA Astrophysics Data System (ADS)

    Ewald, Andrew J.

    2015-05-01

    A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.

  8. Systems-level analysis of microbial community organization through combinatorial labeling and spectral imaging.

    PubMed

    Valm, Alex M; Mark Welch, Jessica L; Rieken, Christopher W; Hasegawa, Yuko; Sogin, Mitchell L; Oldenbourg, Rudolf; Dewhirst, Floyd E; Borisy, Gary G

    2011-03-08

    Microbes in nature frequently function as members of complex multitaxon communities, but the structural organization of these communities at the micrometer level is poorly understood because of limitations in labeling and imaging technology. We report here a combinatorial labeling strategy coupled with spectral image acquisition and analysis that greatly expands the number of fluorescent signatures distinguishable in a single image. As an imaging proof of principle, we first demonstrated visualization of Escherichia coli labeled by fluorescence in situ hybridization (FISH) with 28 different binary combinations of eight fluorophores. As a biological proof of principle, we then applied this Combinatorial Labeling and Spectral Imaging FISH (CLASI-FISH) strategy using genus- and family-specific probes to visualize simultaneously and differentiate 15 different phylotypes in an artificial mixture of laboratory-grown microbes. We then illustrated the utility of our method for the structural analysis of a natural microbial community, namely, human dental plaque, a microbial biofilm. We demonstrate that 15 taxa in the plaque community can be imaged simultaneously and analyzed and that this community was dominated by early colonizers, including species of Streptococcus, Prevotella, Actinomyces, and Veillonella. Proximity analysis was used to determine the frequency of inter- and intrataxon cell-to-cell associations which revealed statistically significant intertaxon pairings. Cells of the genera Prevotella and Actinomyces showed the most interspecies associations, suggesting a central role for these genera in establishing and maintaining biofilm complexity. The results provide an initial systems-level structural analysis of biofilm organization.

  9. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex object features under uncontrolled lighting and background conditions. Applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
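
    A compact sketch of the centroid-tracking step described above, assuming scipy: threshold each video field, label connected features, and record their centroids per frame. The fixed threshold is an assumption; the workstation's leading-edge and model-based 6-DOF tracking are not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi

def track_centroids(frames, threshold=128):
    trajectories = []
    for frame in frames:                              # frames: iterable of 2-D arrays
        labels, n = ndi.label(frame > threshold)      # connected bright features
        centroids = ndi.center_of_mass(frame, labels, range(1, n + 1))
        trajectories.append(np.array(centroids))      # (n_objects, 2) per frame
    return trajectories
```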

  10. Cryo-EM of dynamic protein complexes in eukaryotic DNA replication.

    PubMed

    Sun, Jingchuan; Yuan, Zuanning; Bai, Lin; Li, Huilin

    2017-01-01

    DNA replication in eukaryotes is a highly dynamic process that involves several dozen proteins. Some of these proteins form stable complexes that are amenable to high-resolution structure determination by cryo-EM, thanks to the recent advent of the direct electron detector and powerful image analysis algorithms. But many of these proteins associate only transiently and flexibly, precluding traditional biochemical purification. We found that direct mixing of the component proteins followed by 2D and 3D image sorting can capture some very weakly interacting complexes. Even at the 2D average level and at low resolution, EM images of these flexible complexes can provide important biological insights. It is often necessary to positively identify the feature of interest in a low-resolution EM structure. We found that systematically fusing or inserting maltose binding protein (MBP) into selected proteins is highly effective in these situations. In this chapter, we describe the EM studies of several protein complexes involved in eukaryotic DNA replication over the past decade or so. We suggest that some of the approaches used in these studies may be applicable to structural analysis of other biological systems. © 2016 The Protein Society.

  11. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field computed from different tissues. These features were used to identify spatial characteristics that distinguish healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in the OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to improve the assessment quality of the diagnostic method. The complex directional field (like the "classical" directional field) describes an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is also addressed in this work, owing to the large quantity of experimental data (143 OCT images including tumors such as basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
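
    A minimal box-counting sketch (an assumption about the exact estimator used) for the fractal dimension of a binarized OCT B-scan. The input "mask" is assumed to be a square 2-D boolean array whose side is a power of two and which contains at least one foreground pixel.

```python
import numpy as np

def box_counting_dimension(mask):
    sizes, counts = [], []
    s = mask.shape[0]
    while s >= 2:
        n = mask.shape[0] // s
        # Partition into n x n boxes of side s and count occupied boxes.
        blocks = mask.reshape(n, s, n, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    # Slope of log(count) vs log(size); its negative is the box-counting dimension.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```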

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamichhane, N; Johnson, P; Chinea, F

    Purpose: To evaluate the correlation between image features and the accuracy of manually drawn target contours on synthetic PET images. Methods: A digital PET phantom was used in combination with Monte Carlo simulation to create a set of 26 simulated PET images featuring a variety of tumor shapes and activity heterogeneity. These tumor volumes were used as a gold standard in comparisons with manual contours delineated by 10 radiation oncologists on the simulated PET images. Metrics used to evaluate segmentation accuracy included the Dice coefficient, false positive Dice, false negative Dice, symmetric mean absolute surface distance, and absolute volumetric difference. Image features extracted from the simulated tumors consisted of volume, shape complexity, mean curvature, and intensity contrast, along with five texture features derived from the gray-level neighborhood difference matrices: contrast, coarseness, busyness, strength, and complexity. Correlations between these features and contouring accuracy were examined. Results: Contour accuracy was reasonably well correlated with a variety of image features. The Dice coefficient ranged from 0.7 to 0.90 and was correlated closely with contrast (r=0.43, p=0.02) and complexity (r=0.5, p<0.001). False negative Dice ranged from 0.10 to 0.50 and was correlated closely with contrast (r=0.68, p<0.001) and complexity (r=0.66, p<0.001). Absolute volumetric difference ranged from 0.0002 to 0.67 and was correlated closely with coarseness (r=0.46, p=0.02) and complexity (r=0.49, p=0.008). Symmetric mean absolute surface distance ranged from 0.02 to 1 and was correlated closely with mean curvature (r=0.57, p=0.02) and contrast (r=0.6, p=0.001). Conclusion: The long-term goal of this study is to assess whether contouring variability can be reduced by providing feedback to the practitioner based on image feature analysis. The results are encouraging and will be used to develop a statistical model that will enable prediction of contour accuracy based purely on image feature analysis.
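
    A small helper (a sketch, not the study's evaluation code) for the Dice coefficient between a manual contour and the gold-standard tumor volume, both given as boolean numpy arrays of the same shape.

```python
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / denom if denom else 1.0   # empty masks count as identical
```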

  13. Analysis of interstellar cloud structure based on IRAS images

    NASA Technical Reports Server (NTRS)

    Scalo, John M.

    1992-01-01

    The goal of this project was to develop new tools for the analysis of the structure of densely sampled maps of interstellar star-forming regions. A particular emphasis was on the recognition and characterization of nested hierarchical structure and fractal irregularity, and their relation to the level of star formation activity. The panoramic IRAS images provided data with the required range in spatial scale, greater than a factor of 100, and in column density, greater than a factor of 50. In order to construct densely sampled column density maps of star-forming clouds, column density images of four nearby cloud complexes were constructed from IRAS data. The regions have various degrees of star formation activity, and most of them have probably not been affected much by the disruptive effects of young massive stars. The largest region, the Scorpius-Ophiuchus cloud complex, covers about 1000 square degrees (it was subdivided into a few smaller regions for analysis). Much of the work during the early part of the project focused on an 80 square degree region in the core of the Taurus complex, a well-studied region of low-mass star formation.

  14. [Quantitative data analysis for live imaging of bone].

    PubMed

    Seno, Shigeto

    Because bone is a hard tissue, it has long been difficult to observe its interior in the living state. With recent progress in microscopy and fluorescent probe technology, it has become possible to observe the various activities of the cells that make up bone tissue. On the other hand, the increasing volume of data and the diversification and complexity of the images make it difficult to perform quantitative analysis by visual inspection, so methodologies for microscopic image processing and data analysis are needed. In this article, we introduce bioimage informatics, a research field at the boundary of biology and information science, and then outline the basic image processing technology for quantitative analysis of live imaging data of bone.

  15. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively, even though the target usually occupies only a small fraction of the image; this approach has high computational complexity for large-format visible image data. A bottom-up selective attention mechanism can allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. With this in mind, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, and improves the detection efficiency for ship targets in remote sensing images.
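
    The abstract does not specify which attention model is used; as one simple stand-in, the sketch below computes a bottom-up saliency map with the spectral-residual method (Hou & Zhang, 2007). The filter sizes and normalization are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log1p(np.abs(f))                            # log amplitude spectrum
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)     # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)                      # smooth the saliency map
    return (sal - sal.min()) / (sal.ptp() + 1e-12)           # normalized to [0, 1]
```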

  16. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems be increased significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and can enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  17. Quantum watermarking scheme through Arnold scrambling and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping

    2017-09-01

    Based on the NEQR representation of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n × 2n and n × n, respectively. Firstly, a classical n × n watermark image with 8-bit gray scale is expanded to a 2n × 2n image with 2-bit gray scale. Secondly, through the PA-MOD N module, the expanded watermark image is scrambled into a meaningless image by the Arnold transform. Then, the expanded, scrambled image is embedded into the carrier image by LSB steganography. Finally, a time complexity analysis is given. The simulation results show that our quantum circuit has lower time complexity and that the proposed watermarking scheme is superior to others.
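
    A minimal classical illustration of Arnold scrambling for an N x N image (the quantum circuit in the paper implements the analogous permutation with modular adders); the mapping used is the standard cat map (x, y) -> ((x + y) mod N, (x + 2y) mod N), and the iteration count is an assumption.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    n = img.shape[0]                      # assumes a square n x n image
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```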

  18. Revisiting Rossion and Pourtois with new ratings for automated complexity, familiarity, beauty, and encounter.

    PubMed

    Forsythe, Alex; Street, Nichola; Helmy, Mai

    2017-08-01

    Differences between norm ratings collected when participants are asked to consider more than one picture characteristic are contrasted with the traditional methodological approach of collecting ratings separately for each image construct. We present data suggesting that normative ratings obtained with procedures that ask participants to consider multiple image constructs simultaneously may be confounded. We provide data for two new image constructs, beauty and the extent to which participants encountered the stimuli in their everyday lives. Analysis of these data suggests that familiarity and encounter tap different image constructs. The extent to which an observer encounters an object predicts human judgments of visual complexity. Encountering an image was also found to be an important predictor of beauty, but familiarity with that image was not. Taken together, these results suggest that continuing to collect complexity measures from human judgments is a pointless exercise. Automated measures are more reliable and valid, and are shown here to predict human preferences.

  19. TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.

    PubMed

    Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D

    2018-05-08

    Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
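
    A toy sketch of the kind of spatiotemporal measurement such a tool automates (not TASI's own pipeline): segment the spheroid in each frame by Otsu thresholding and record the area of the largest object over time. Pixel size calibration and invasion-specific features are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def spheroid_area_trace(frames):
    areas = []
    for frame in frames:                          # frames: iterable of 2-D grayscale arrays
        mask = frame > threshold_otsu(frame)
        labels, n = ndi.label(mask)
        if n == 0:
            areas.append(0)
            continue
        sizes = ndi.sum(mask, labels, range(1, n + 1))
        areas.append(int(sizes.max()))            # keep the largest object per frame
    return np.array(areas)
```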

  20. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30-meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension of a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference image occurs around a resolution of 120 meters, which is analogous to the operational domain of the changes in vegetation and snow cover that constitute the differences between the two dates.
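
    A one-line NDVI sketch matching the conversion described above; "red" and "nir" are assumed to be co-registered Landsat TM band 3 and band 4 arrays scaled to reflectance, and a small epsilon guards against division by zero.

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + eps)
```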

  1. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes the choice of temporal or spatial resolution independent of SNR. The new algorithm was evaluated for both spatial and temporal analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
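
    A hedged sketch of the spatial speckle-contrast calculation itself, K = sigma / mean in a sliding window over a raw speckle frame; the 7x7 window is an assumption, and the paper's optimized multi-core algorithm is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    frame = frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame * frame, window)
    var = np.clip(mean_sq - mean * mean, 0, None)   # local variance (clipped at 0)
    return np.sqrt(var) / (mean + 1e-12)            # local speckle contrast K
```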

  2. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users aiding in search and rescue missions with tools such as TomNod, to trained analysts synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identifying land cover and land use change. Analysts participating in this research are currently working as part of a national-level analysis of land use change, and are well versed in the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts by improving their awareness of the mental processes they use during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  3. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.

  4. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
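
    A schematic (not the FARSIGHT script itself) of the server-side pattern the abstract describes: split a very large volume into tiles and process them with a pool of workers while logging each step. The tile size, the per-tile operation, and the random stand-in volume are all placeholders.

```python
import logging
from concurrent.futures import ProcessPoolExecutor

import numpy as np

logging.basicConfig(level=logging.INFO)

def process_tile(args):
    (z0, y0, x0), tile = args
    # Placeholder per-tile work: crude background subtraction + thresholding.
    mask = tile > tile.mean() + 2 * tile.std()
    logging.info("tile at (%d, %d, %d): %d foreground voxels", z0, y0, x0, mask.sum())
    return (z0, y0, x0), mask

def tiles(volume, size=256):
    z, y, x = volume.shape
    for z0 in range(0, z, size):
        for y0 in range(0, y, size):
            for x0 in range(0, x, size):
                yield (z0, y0, x0), volume[z0:z0 + size, y0:y0 + size, x0:x0 + size]

if __name__ == "__main__":
    volume = np.random.rand(512, 512, 512).astype(np.float32)  # stand-in for one channel
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_tile, tiles(volume)))
```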

  5. SPR imaging based electronic tongue via landscape images for complex mixture analysis.

    PubMed

    Genua, Maria; Garçon, Laurie-Amandine; Mounier, Violette; Wehry, Hillary; Buhot, Arnaud; Billon, Martial; Calemczuk, Roberto; Bonnaffé, David; Hou, Yanxia; Livache, Thierry

    2014-12-01

    Electronic noses/tongues (eN/eT) have emerged as promising alternatives for analysis of complex mixtures in the domain of food and beverage quality control. We have recently developed an electronic tongue by combining surface plasmon resonance imaging (SPRi) with an array of non-specific and cross-reactive receptors prepared by simply mixing two small molecules in varying and controlled proportions and allowing the mixtures to self-assemble on the SPRi prism surface. The obtained eT generated novel and unique 2D continuous evolution profiles (CEPs) and 3D continuous evolution landscapes (CELs) based on which the differentiation of complex mixtures such as red wine, beer and milk were successful. The preliminary experiments performed for monitoring the deterioration of UHT milk demonstrated its potential for quality control applications. Furthermore, the eT exhibited good repeatability and stability, capable of operating after a minimum storage period of 5 months. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A Quantitative Microscopy Technique for Determining the Number of Specific Proteins in Cellular Compartments

    PubMed Central

    Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.

    2013-01-01

    This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
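
    A deliberately simplified sketch of the final counting step: the average copy number in a compartment is estimated by dividing its background-corrected fluorescence by the mean intensity of a single primary-secondary antibody complex from a calibration sample. The full protocol deconvolves the intensity distributions rather than taking a simple ratio, and all inputs here are placeholders.

```python
import numpy as np

def mean_copy_number(compartment_intensities, single_complex_intensities, background=0.0):
    comp = np.asarray(compartment_intensities, dtype=float) - background
    unit = np.mean(single_complex_intensities)       # intensity of one antibody complex
    return comp.mean() / unit, comp.std() / unit     # mean and spread in "copies"
```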

  7. Single-Cell Analysis Using Hyperspectral Imaging Modalities.

    PubMed

    Mehta, Nishir; Shaik, Shahensha; Devireddy, Ram; Gartia, Manas Ranjan

    2018-02-01

    Almost a decade ago, hyperspectral imaging (HSI) was employed by NASA in satellite imaging applications such as remote sensing. This technology has since been used extensively in mineral exploration, agriculture, water resources, and urban development. Due to recent advances in optical reconstruction and imaging, HSI can now be applied down to micro- and nanometer scales, potentially allowing exquisite control and analysis from single cells to complex biological systems. This short review provides a description of the working principle of HSI technology and how HSI can be used to assist, substitute for, and validate traditional imaging technologies. This is followed by a description of the use of HSI for biological analysis and medical diagnostics, with emphasis on single-cell analysis using HSI.

  8. Global and Local Translation Designs of Quantum Image Based on FRQI

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Tan, Canyun; Ian, Hou

    2017-04-01

    In this paper, two kinds of quantum image translation are designed based on FRQI: global translation and local translation. First, global translation is realized by employing an adder modulo N, in which all pixels in the image are moved, and the circuit for right translation is designed; left translation can also be implemented using right translation. Complexity analysis shows that the global translation circuits in this paper have lower complexity and require fewer qubits. Second, local translation, consisting of single-column translation, multiple-column translation and translation in a restricted area, is designed by adopting Gray code. In local translation, any subset of pixels in the image can be translated while the other pixels remain unchanged. To lower the complexity when more than one column needs to be translated, multiple-column translation is proposed, which has approximately the same complexity as single-column translation. To perform multiple-column translation, three conditions must be satisfied. In addition, all translations in this paper are cyclic.

  9. Lithological discrimination of accretionary complex (Sivas, northern Turkey) using novel hybrid color composites and field data

    NASA Astrophysics Data System (ADS)

    Özkan, Mutlu; Çelik, Ömer Faruk; Özyavaş, Aziz

    2018-02-01

    One of the most appropriate approaches to better understand and interpret the geologic evolution of an accretionary complex is to make a detailed geologic map. The fact that ophiolite sequences consist of various rock types may require a unique image processing method to map each ophiolite body. The accretionary complex in the study area is composed mainly of ophiolitic and metamorphic rocks along with epi-ophiolitic sedimentary rocks. This paper attempts to map the Late Cretaceous accretionary complex in detail in northern Sivas (within the İzmir-Ankara-Erzincan Suture Zone in Turkey) by the analysis of all of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands and field study. The two new hybrid color composite images yield satisfactory results in delineating the peridotite, gabbro, basalt, and epi-ophiolitic sedimentary rocks of the accretionary complex in the study area. While the first hybrid color composite image consists of one principal component (PC) and two band ratios (PC1, 3/4, 4/6 in the RGB), the PC5, the original ASTER band 4 and the 3/4 band ratio images were assigned to the RGB colors to generate the second hybrid color composite image. In addition, the spectral indices derived from the ASTER thermal infrared (TIR) bands clearly discriminate ultramafic, siliceous, and carbonate rocks from adjacent lithologies at a regional scale. Peridotites with varying degrees of serpentinization, illustrated as a single color, were best identified in the spectral indices map. Furthermore, the boundaries of ophiolitic rocks based on fieldwork were outlined in detail in some parts of the study area by superimposing the resultant ASTER maps on Google Earth images of finer spatial resolution. Ultimately, the encouraging geologic map generated by the image analysis of ASTER data correlates strongly with the lithological boundaries from the field survey.

  10. A three-dimensional image processing program for accurate, rapid, and semi-automated segmentation of neuronal somata with dense neurite outgrowth

    PubMed Central

    Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.

    2015-01-01

    Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609

  11. A novel imaging method for photonic crystal fiber fusion splicer

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Guo, Xuan

    2007-01-01

    Because the structure of photonic crystal fiber (PCF) is very complex, it is very difficult for a traditional fiber fusion splicer to obtain optical axial information about the PCF. Therefore, a brand-new optical imaging method is needed to obtain cross-sectional information on photonic crystal fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element; the thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device can provide high-contrast, high-resolution imaging under low-light-level conditions. In order to realize precise focusing of the image, an ultra-high-precision stepping motor is used to adjust the position of the imaging lens. In this way, legible cross-sectional information of the PCF can be obtained, and further analysis of this information can be carried out with digital image processing technology. Using this cross-sectional information, different sorts of PCF can be distinguished, parameters such as the size of the PCF air holes and the cladding structure of the PCF can be computed, and the necessary analysis data can be provided for PCF fixation, adjustment, regulation, fusion and cutting systems.

  12. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  13. Biological and Clinical Aspects of Lanthanide Coordination Compounds

    PubMed Central

    Misra, Sudhindra N.; M., Indira Devi; Shukla, Ram S.

    2004-01-01

    The coordination chemistry of lanthanides, relevant to biological, biochemical and medical applications, makes a significant contribution to understanding the basis for the application of lanthanides, particularly in biological and medical systems. The applications of lanthanides, as excellent diagnostic and prognostic probes in clinical diagnostics and as anticancer materials, are becoming remarkably important. Lanthanide-complex-based X-ray contrast imaging and lanthanide-chelate-based contrast-enhancing agents for magnetic resonance imaging (MRI) are used extensively in radiological analysis of body systems. The most important property of a chelating agent in a lanthanide chelate complex is its ability to alter the behaviour of the lanthanide ion to which it binds in biological systems; chelation markedly modifies the biodistribution and excretion profile of the lanthanide ion. Chelating agents, especially aminopolycarboxylic acids, being hydrophilic, increase the proportion of the complexed lanthanide ion that is excreted from biological systems. Lanthanide polyamino carboxylate chelate complexes are used as contrast-enhancing agents for magnetic resonance imaging. Conjugation of antibodies and other tissue-specific molecules to lanthanide chelates has led to a new type of specific MRI contrast agent with improved relaxivity, functioning in the body similarly to drugs. Many specific features of contrast-agent-assisted MRI make it particularly effective for musculoskeletal and cerebrospinal imaging. Lanthanide chelate contrast agents are used effectively in clinical diagnostic investigations of cerebrospinal diseases and in evaluation of the central nervous system. Shift-reagent-aided 23Na NMR spectroscopic analysis using chelated lanthanide complexes is employed in cellular, tissue and whole-organ systems. PMID:18365075

  14. Steganalysis based on reducing the differences of image statistical characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the embedding process itself, the image content has a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances; as a result, the steganalysis features become inseparable because of the differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that reduces the differences in image statistical characteristics caused by varying content and processing methods. The given images are segmented into several sub-images according to texture complexity, and steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
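
    A hedged sketch of the framework's first step: divide an image into blocks, measure each block's texture complexity (here local variance is used as a stand-in for the paper's measure), and group blocks of similar complexity so that steganalysis features can be extracted per group. Block size and group count are assumptions.

```python
import numpy as np

def group_blocks_by_complexity(gray, block=32, n_groups=3):
    # Crop to a multiple of the block size and cut into (block x block) tiles.
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block).astype(float)
    complexity = blocks.var(axis=(1, 2))                     # per-block texture measure
    edges = np.quantile(complexity, np.linspace(0, 1, n_groups + 1)[1:-1])
    labels = np.digitize(complexity, edges)                  # group index 0..n_groups-1
    return [blocks[labels == g] for g in range(n_groups)]
```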

  15. Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laskin, Julia; Lanekoff, Ingela

    2015-11-13

    Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples.1-4 MSI applications range from forensics5 to clinical research6 and from understanding microbial communication7-8 to imaging biomolecules in tissues.1, 9-10 Recently, MSI protocols have been reviewed.11 Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment.3, 12-16 In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis.17 This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of molecules in the sample.3, 8, 12-14, 18-20 Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized.17 This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as "liquid extraction techniques" based on the classification introduced by Venter and co-workers.17 Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to a problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.

  16. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    PubMed Central

    Tameem, Hussain Z.; Sinha, Usha S.

    2011-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520

  17. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    NASA Astrophysics Data System (ADS)

    Tameem, Hussain Z.; Sinha, Usha S.

    2007-11-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features.

  18. Lucky Imaging: Improved Localization Accuracy for Single Molecule Imaging

    PubMed Central

    Cronin, Bríd; de Wet, Ben; Wallace, Mark I.

    2009-01-01

    We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells. PMID:19348772

  19. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research, and they attempt to significantly reduce the labor and subjectivity of traditional manual analysis of histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890

  20. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect the shot boundaries and extract the keyframe of a shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.

  1. BASTet: Shareable and Reproducible Analysis and Visualization of Mass Spectrometry Imaging Data via OpenMSI.

    PubMed

    Rubel, Oliver; Bowen, Benjamin P

    2018-01-01

    Mass spectrometry imaging (MSI) is a transformative imaging method that supports the untargeted, quantitative measurement of the chemical composition and spatial heterogeneity of complex samples with broad applications in life sciences, bioenergy, and health. While MSI data can be routinely collected, its broad application is currently limited by the lack of easily accessible analysis methods that can process data of the size, volume, diversity, and complexity generated by MSI experiments. The development and application of cutting-edge analytical methods is a core driver in MSI research for new scientific discoveries, medical diagnostics, and commercial innovation. However, the lack of means to share, apply, and reproduce analyses hinders the broad application, validation, and use of novel MSI analysis methods. To address this central challenge, we introduce the Berkeley Analysis and Storage Toolkit (BASTet), a novel framework for shareable and reproducible data analysis that supports standardized data and analysis interfaces, integrated data storage, data provenance, workflow management, and a broad set of integrated tools. Based on BASTet, we describe the extension of the OpenMSI mass spectrometry imaging science gateway to enable web-based sharing, reuse, analysis, and visualization of data analyses and derived data products. We demonstrate the application of BASTet and OpenMSI in practice to identify and compare characteristic substructures in the mouse brain based on their chemical composition measured via MSI.

  2. Combining atomic force and fluorescence microscopy for analysis of quantum-dot labeled protein–DNA complexes

    PubMed Central

    Ebenstein, Yuval; Gassman, Natalie; Kim, Soohong; Weiss, Shimon

    2011-01-01

    Atomic force microscopy (AFM) and fluorescence microscopy are widely used for the study of protein-DNA interactions. While AFM excels in its ability to elucidate structural detail and spatial arrangement, it lacks the ability to distinguish between similarly sized objects in a complex system. This information is readily accessible to optical imaging techniques via site-specific fluorescent labels, which enable the direct detection and identification of multiple components simultaneously. Here, we show how the utilization of semiconductor quantum dots (QDs), serving as contrast agents for both AFM topography and fluorescence imaging, facilitates the combination of both imaging techniques, and with the addition of a flow based DNA extension method for sample deposition, results in a powerful tool for the study of protein-DNA complexes. We demonstrate the inherent advantages of this novel combination of techniques by imaging individual RNA polymerases (RNAP) on T7 genomic DNA. PMID:19452448

  3. Qualitative identification of food materials by complex refractive index mapping in the terahertz range.

    PubMed

    Shin, Hee Jun; Choi, Sung-Wook; Ok, Gyeongsik

    2018-04-15

    We investigated the feasibility of qualitative food analysis using complex refractive index mapping of food materials in the terahertz (THz) frequency range. We studied optical properties such as the refractive index and absorption coefficient of food materials, including insects as foreign substances, from 0.2 to 1.3 THz. Although some food materials had a complex composition, their refractive indices were approximated with effective medium values, and therefore, they could be discriminated on the complex refractive index map. To demonstrate food quality inspection with THz imaging, we obtained THz reflective images and time-of-flight imaging of hidden defects in a sugar and milk powder matrix by using time domain THz pulses. Our results indicate that foreign substances can be clearly classified and detected according to the optical parameters of the foods and insects by using THz pulses.

  4. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed

    Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than the RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images but the significance of the use of whole multispectral cube versus a subset of bands or a single band is still arguable. We performed comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and got comparable results with those obtained using the state-of-the-art method and convolutional neural network based method. Moreover, we explored the relationship of the number of bands with the problem complexity and found that higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing set for evaluating the histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.

  5. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed Central

    Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than the RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images but the significance of the use of whole multispectral cube versus a subset of bands or a single band is still arguable. We performed comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and got comparable results with those obtained using the state-of-the-art method and convolutional neural network based method. Moreover, we explored the relationship of the number of bands with the problem complexity and found that higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing set for evaluating the histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images. PMID:29874262

  6. Frames as visual links between paintings and the museum environment: an analysis of statistical image properties

    PubMed Central

    Redies, Christoph; Groß, Franziska

    2013-01-01

    Frames provide a visual link between artworks and their surround. We asked how image properties change as an observer zooms out from viewing a painting alone, to viewing the painting with its frame and, finally, the framed painting in its museum environment (museum scene). To address this question, we determined three higher-order image properties that are based on histograms of oriented luminance gradients. First, complexity was measured as the sum of the strengths of all gradients in the image. Second, we determined the self-similarity of histograms of the oriented gradients at different levels of spatial analysis. Third, we analyzed how much gradient strength varied across orientations (anisotropy). Results were obtained for three art museums that exhibited paintings from three major periods of Western art. In all three museums, the mean complexity of the frames was higher than that of the paintings or the museum scenes. Frames thus provide a barrier of complexity between the paintings and their exterior. By contrast, self-similarity and anisotropy values of images of framed paintings were intermediate between the images of the paintings and the museum scenes, i.e., the frames provided a transition between the paintings and their surround. We also observed differences between the three museums that may reflect modified frame usage in different art periods. For example, frames in the museum for 20th century art tended to be smaller and less complex than in the other two museums that exhibit paintings from earlier art periods (13th–18th century and 19th century, respectively). Finally, we found that the three properties did not depend on the type of reproduction of the paintings (photographs in museums, scans from books or images from the Google Art Project). To the best of our knowledge, this study is the first to investigate the relation between frames and paintings by measuring physically defined, higher-order image properties. PMID:24265625
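
    Two of these properties can be approximated with a short sketch. Assuming a grayscale numpy array, complexity is taken as the summed gradient magnitude and anisotropy as the spread of gradient strength over orientation bins; these are illustrative definitions in the spirit of the abstract, not the authors' exact implementation.

```python
import numpy as np

def gradient_statistics(gray, n_bins=16):
    """Complexity as summed gradient strength; anisotropy as the spread of
    gradient strength across orientation bins (illustrative definitions)."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % np.pi        # fold orientations into [0, pi)
    complexity = magnitude.sum()
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0, np.pi),
                           weights=magnitude)
    anisotropy = hist.std() / (hist.mean() + 1e-12)
    return complexity, anisotropy
```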

  7. Network analysis of mesoscale optical recordings to assess regional, functional connectivity.

    PubMed

    Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H

    2015-10-01

    With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
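
    As a minimal illustration of regional, functional network analysis, the sketch below thresholds the pairwise correlation matrix of ROI time courses and reports a few standard graph metrics with networkx; the threshold value and the chosen metrics are illustrative assumptions, not the published protocol.

```python
import numpy as np
import networkx as nx

def regional_network_metrics(roi_timecourses, threshold=0.6):
    """Build a functional network from ROI time courses (regions x time points)
    by thresholding the correlation matrix, then compute basic graph metrics."""
    corr = np.corrcoef(roi_timecourses)
    np.fill_diagonal(corr, 0.0)
    adjacency = (np.abs(corr) >= threshold).astype(int)
    g = nx.from_numpy_array(adjacency)
    return {
        "density": nx.density(g),
        "mean_clustering": nx.average_clustering(g),
        "mean_degree": float(np.mean([d for _, d in g.degree()])),
    }
```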

  8. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of the encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of two chaotic systems, a 280-bit long external secret key is employed. The key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations such as noise adding, cropping, JPEG compression, rotation, brightening and darkening has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
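
    The permutation stage of a permutation-diffusion cipher can be sketched as follows. This toy example reorders pixel positions by arg-sorting a logistic-map sequence; it is only a stand-in for the paper's CML and fractional-order chaotic system, and the key values are arbitrary.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    # Logistic map used as a stand-in chaotic generator (not the paper's CML).
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def permute_pixels(img, key=0.3579, r=3.99):
    """Toy permutation stage: shuffle pixel positions via a chaotic sequence."""
    flat = img.reshape(-1, img.shape[-1]) if img.ndim == 3 else img.ravel()
    order = np.argsort(logistic_sequence(key, r, flat.shape[0]))
    return flat[order].reshape(img.shape), order

def unpermute_pixels(shuffled, order):
    flat = shuffled.reshape(-1, shuffled.shape[-1]) if shuffled.ndim == 3 else shuffled.ravel()
    inverse = np.empty_like(order)
    inverse[order] = np.arange(order.size)
    return flat[inverse].reshape(shuffled.shape)
```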

  9. CryoEM and image sorting for flexible protein/DNA complexes.

    PubMed

    Villarreal, Seth A; Stewart, Phoebe L

    2014-07-01

    Intrinsically disordered regions of proteins and conformational flexibility within complexes can be critical for biological function. However, disorder, flexibility, and heterogeneity often hinder structural analyses. CryoEM and single particle image processing techniques offer the possibility of imaging samples with significant flexibility. Division of particle images into more homogenous subsets after data acquisition can help compensate for heterogeneity within the sample. We present the utility of an eigenimage sorting analysis for examining two protein/DNA complexes with significant conformational flexibility and heterogeneity. These complexes are integral to the non-homologous end joining pathway, and are involved in the repair of double strand breaks of DNA. Both complexes include the DNA-dependent protein kinase catalytic subunit (DNA-PKcs) and biotinylated DNA with bound streptavidin, with one complex containing the Ku heterodimer. Initial 3D reconstructions of the two DNA-PKcs complexes resembled a cryoEM structure of uncomplexed DNA-PKcs without additional density clearly attributable to the remaining components. Application of eigenimage sorting allowed division of the DNA-PKcs complex datasets into more homogeneous subsets. This led to visualization of density near the base of the DNA-PKcs that can be attributed to DNA, streptavidin, and Ku. However, comparison of projections of the subset structures with 2D class averages indicated that a significant level of heterogeneity remained within each subset. In summary, image sorting methods allowed visualization of extra density near the base of DNA-PKcs, suggesting that DNA binds in the vicinity of the base of the molecule and potentially to a flexible region of DNA-PKcs.

  10. Imaging flow cytometry for phytoplankton analysis.

    PubMed

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instruments relevant to the field and currently used for assessment of complex phytoplankton communities' composition and abundance, size structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the Imagestream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments.

  11. Synthesis and evaluation of gadolinium complexes based on PAMAM as MRI contrast agents.

    PubMed

    Yan, Guo-Ping; Hu, Bin; Liu, Mai-Li; Li, Li-Yun

    2005-03-01

    Diethylenetriaminepentaacetic acid (DTPA) and pyridoxamine (PM) were incorporated into the amine groups on the surface of ammonia-core poly(amidoamine) dendrimers (PAMAM, Generation 2.0-5.0) to obtain dendritic ligands. These dendritic ligands were reacted with gadolinium chloride to yield the corresponding dendritic gadolinium (Gd) complexes. The dendritic ligands and their gadolinium complexes were characterized by (1)H NMR, IR, UV and elemental analysis. Relaxivity studies showed that the dendritic gadolinium complexes possessed higher relaxation effectiveness compared with the clinically used Gd-DTPA. After administration of the dendritic gadolinium complexes (0.09 mmol kg(-1)) to rats, magnetic resonance imaging of the liver indicated that the dendritic gadolinium complexes containing pyridoxamine groups enhanced the contrast of the MR images of the liver, provided prolonged intravascular duration and produced highly contrasted visualization of blood vessels.

  12. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which allow efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. We present the results of analyzing segmentation algorithms of this type and the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its applicability to video analysis; however, it allows us to solve the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.

  13. An innovative and shared methodology for event reconstruction using images in forensic science.

    PubMed

    Milliet, Quentin; Jendly, Manon; Delémont, Olivier

    2015-09-01

    This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity.

  14. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. In order to make a quantitative assessment of the algorithms in the experiments, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, which is consistent with the visual effect of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising. It is even hard to distinguish the original noiseless image from the image recovered by the GMCA algorithm through visual inspection.
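
    The two fidelity indices used for the quantitative assessment can be computed directly with scikit-image, as in the short sketch below; the data_range value assumes 8-bit grayscale images.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def assess_denoising(reference, denoised, data_range=255):
    """Gray-level fidelity (PSNR) and structure-level fidelity (SSIM)."""
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    ssim = structural_similarity(reference, denoised, data_range=data_range)
    return psnr, ssim
```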

  15. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture Images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and, in particular, identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point have been proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analysis employed for time series, and SSA is one of the most powerful techniques for this purpose. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.

  16. A Developmental Framework for Complex Plasmodesmata Formation Revealed by Large-Scale Imaging of the Arabidopsis Leaf Epidermis

    PubMed Central

    Fitzgibbon, Jessica; Beck, Martina; Zhou, Ji; Faulkner, Christine; Robatzek, Silke; Oparka, Karl

    2013-01-01

    Plasmodesmata (PD) form tubular connections that function as intercellular communication channels. They are essential for transporting nutrients and for coordinating development. During cytokinesis, simple PDs are inserted into the developing cell plate, while during wall extension, more complex (branched) forms of PD are laid down. We show that complex PDs are derived from existing simple PDs in a pattern that is accelerated when leaves undergo the sink–source transition. Complex PDs are inserted initially at the three-way junctions between epidermal cells but develop most rapidly in the anisocytic complexes around stomata. For a quantitative analysis of complex PD formation, we established a high-throughput imaging platform and constructed PDQUANT, a custom algorithm that detected cell boundaries and PD numbers in different wall faces. For anticlinal walls, the number of complex PDs increased with increasing cell size, while for periclinal walls, the number of PDs decreased. Complex PD insertion was accelerated by up to threefold in response to salicylic acid treatment and challenges with mannitol. In a single 30-min run, we could derive data for up to 11k PDs from 3k epidermal cells. This facile approach opens the door to a large-scale analysis of the endogenous and exogenous factors that influence PD formation. PMID:23371949

  17. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.

  18. SAR image change detection using watershed and spectral clustering

    NASA Astrophysics Data System (ADS)

    Niu, Ruican; Jiao, L. C.; Wang, Guiting; Feng, Jie

    2011-12-01

    A new method of change detection in SAR images based on spectral clustering is presented in this paper. Spectral clustering is employed to extract change information from a pair of images acquired over the same geographical area at different times. The watershed transform is applied to initially segment the large image into non-overlapping local regions, which reduces the complexity. Experimental results and analysis confirm the effectiveness of the proposed algorithm.

  19. Contrast improvement of terahertz images of thin histopathologic sections

    PubMed Central

    Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio

    2011-01-01

    We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast. PMID:21326635

  20. Contrast improvement of terahertz images of thin histopathologic sections.

    PubMed

    Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio

    2010-12-03

    We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast.

  1. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is utilized by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to fusion with a set of different region-based monocular image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  2. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong

    2012-11-01

    Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method was applied to characterize the pore structure of artificial cores. The microstructure, or pore distribution, in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution when performing fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image side was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence); it partitions the image completely and yields the smallest fitting error. Threshold values selected manually and automatically showed that thresholding plays an important role during image binarization and that the minimum-error method can be used to obtain a reasonable value. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution. Higher image resolution can detect more of the pore structure and increases its measured irregularity. Taking these influence factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can distinguish the different kinds of artificial cores and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a − b·ln(x + c).
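
    A minimal box-counting sketch consistent with the divisor-based box sizes described above is given below; it assumes a square binary pore map containing at least one pore pixel at every box size, and estimates the dimension as the slope of log N(s) versus log(1/s).

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Estimate the fractal dimension of a square binary pore map by box counting,
    using box sizes that are exact divisors of the image side."""
    n = binary_image.shape[0]
    sizes = [s for s in range(2, n // 2 + 1) if n % s == 0]
    counts = []
    for s in sizes:
        blocks = binary_image.reshape(n // s, s, n // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))  # boxes containing pore pixels
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```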

  3. Exploring hyperspectral imaging data sets with topological data analysis.

    PubMed

    Duponchel, Ludovic

    2018-02-13

    Analytical chemistry is rapidly changing. Indeed, we acquire ever more data in order to go further in the exploration of complex samples. Hyperspectral imaging has not escaped this trend. It quickly became a tool of choice for molecular characterisation of complex samples in many scientific domains. The main reason is that it simultaneously provides spectral and spatial information. As a result, chemometrics has provided many exploration tools (PCA, clustering, MCR-ALS …) well-suited for such data structures at an early stage. However, we are today facing a new challenge considering the always increasing number of pixels in the data cubes we have to manage. The idea is therefore to introduce the new paradigm of Topological Data Analysis in order to explore hyperspectral imaging data sets, highlighting its attractive properties and specific features. With this paper, we also point out that conventional chemometric methods are often based on variance analysis or simply impose a data model which implicitly defines the geometry of the data set. Thus we show that this is not always appropriate in the framework of hyperspectral imaging data set exploration.

  4. Three-Dimensional Anatomic Evaluation of the Anterior Cruciate Ligament for Planning Reconstruction

    PubMed Central

    Hoshino, Yuichi; Kim, Donghwi; Fu, Freddie H.

    2012-01-01

    Anatomic study related to the anterior cruciate ligament (ACL) reconstruction surgery has been developed in accordance with the progress of imaging technology. Advances in imaging techniques, especially the move from two-dimensional (2D) to three-dimensional (3D) image analysis, substantially contribute to anatomic understanding and its application to advanced ACL reconstruction surgery. This paper introduces previous research about image analysis of the ACL anatomy and its application to ACL reconstruction surgery. Crucial bony landmarks for the accurate placement of the ACL graft can be identified by 3D imaging technique. Additionally, 3D-CT analysis of the ACL insertion site anatomy provides better and more consistent evaluation than conventional “clock-face” reference and roentgenologic quadrant method. Since the human anatomy has a complex three-dimensional structure, further anatomic research using three-dimensional imaging analysis and its clinical application by navigation system or other technologies is warranted for the improvement of the ACL reconstruction. PMID:22567310

  5. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with an increased amount of higher complexity data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.

  6. Content Analysis of Science Teacher Representations in Google Images

    ERIC Educational Resources Information Center

    Bergman, Daniel

    2017-01-01

    Teacher images can impact numerous perceptions in educational settings, as well as through popular media. The portrayal of effective science teaching is especially challenging to specify, given the complex nature of science inquiry and other standards-based practices. The present study examined the litany of representations of science teachers…

  7. Visual analytics for semantic queries of TerraSAR-X image content

    NASA Astrophysics Data System (ADS)

    Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai

    2015-10-01

    With the continuous image product acquisition of satellite missions, the size of the image archives is considerably increasing every day, as well as the variety and complexity of their content, surpassing the end-user's capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of the images from huge archives using different parameters like metadata, key-words, and basic image descriptors. Even though we count on more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualization techniques for effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, several current research efforts focus on associating the content of the images with semantic definitions that describe the data in a format easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is mainly composed of four steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product (the model is formed by primitive descriptors and metadata entries), 2) the storage of this model in a database system, 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback, and 4) querying the image archive using semantic descriptors as query parameters and computing statistical analysis of the query results. The experimental results show that, with the help of visual analytics and semantic definitions, we are able to explain the image content using semantic terms and the relations between them, answering questions such as "What is the percentage of urban area in a region?" or "What is the distribution of water bodies in a city?"

  8. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  9. Comparison of two freely available software packages for mass spectrometry imaging data analysis using brains from morphine addicted rats.

    PubMed

    Bodzon-Kulakowska, Anna; Marszalek-Grabska, Marta; Antolak, Anna; Drabik, Anna; Kotlinska, Jolanta H; Suder, Piotr

    Data analysis for mass spectrometry imaging (MSI) experiments is a very complex task. Most of the software packages devoted to this purpose are designed by the mass spectrometer manufacturers and, thus, are not freely available. Laboratories developing their own MS-imaging sources usually do not have access to the commercial software, and they must rely on freely available programs. The most recognized ones are BioMap, developed by Novartis under Interactive Data Language (IDL), and Datacube, developed by the Dutch Foundation for Fundamental Research of Matter (FOM-Amolf). These two systems were used here for the analysis of images obtained from rat brain tissues subjected to morphine influence, and their capabilities were compared in terms of ease of use and the quality of the obtained results.

  10. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  11. GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis

    NASA Astrophysics Data System (ADS)

    Nass, Andrea; van Gasselt, Stephan

    2015-04-01

    Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such (basic) tools can be used to build complex analysis tools, e.g. in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data, but geologic mapping data can also be efficiently stored and accessed using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface roughness estimates are two common concepts that are of particular interest, and for both, a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on extraction of landing-site characteristics using established criteria. We provide working examples and particularly focus on the concepts of terrain roughness as it is interpreted in geomorphology and engineering studies.
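
    One of the co-existing roughness definitions mentioned above can be sketched as a moving-window statistic over a digital elevation model; the window size is an illustrative choice, and this is not necessarily the definition implemented in the toolset.

```python
import numpy as np
from scipy.ndimage import generic_filter

def surface_roughness(dem, window=5):
    """Common roughness proxy: local standard deviation of elevation within a
    sliding window over a digital elevation model (DEM)."""
    return generic_filter(dem.astype(float), np.std, size=window)
```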

  12. Automated processing of zebrafish imaging data: a survey.

    PubMed

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.

  13. Automated Processing of Zebrafish Imaging Data: A Survey

    PubMed Central

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  14. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
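
    For comparison with the matrix-free approach described above, a conventional co-occurrence computation (which does build and store the matrix) can be written with scikit-image; the function names follow recent scikit-image releases (graycomatrix, formerly greycomatrix), and the distances and angles are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_texture(gray_u8):
    """Co-occurrence texture descriptors for an 8-bit grayscale image."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```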

  15. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
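
    A basic multiscale decomposition of this kind can be sketched with PyWavelets; the wavelet choice and number of levels are illustrative assumptions, not the transforms used in the study.

```python
import pywt

def multiscale_bands(image, wavelet="haar", levels=3):
    """Decompose an image into an approximation band and per-level detail bands
    (horizontal, vertical, diagonal), each capturing a different spatial scale."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=levels)
    approximation, details = coeffs[0], coeffs[1:]
    return approximation, details
```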

  16. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images is still an issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) theory can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which helps detect and recognize faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, some real SAR image data is used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.
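
    As a minimal illustration of the separation step, the sketch below applies scikit-learn's FastICA to a stack of co-registered SAR images, treating each image as one observation; this is a generic FastICA application, not the paper's full detection pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_components(sar_images):
    """Recover independent components from co-registered SAR images and reshape
    each component back to image form."""
    shape = sar_images[0].shape
    x = np.stack([img.ravel() for img in sar_images])    # images x pixels
    ica = FastICA(n_components=len(sar_images), random_state=0)
    sources = ica.fit_transform(x.T).T                   # components x pixels
    return [component.reshape(shape) for component in sources]
```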

  17. Investigation of methods to search for the boundaries on the image and their use on lung hardware of methods finding saliency map

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because such techniques must run on portable devices with limited resources. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed saliency map detection method is based on analysis in both the image and frequency domains. Several test images from the Kodak dataset with different levels of detail are considered in this paper to demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
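
    The paper's own image-plus-frequency-domain method is not specified in enough detail to reproduce, so the sketch below shows a well-known frequency-domain alternative, the spectral residual saliency method of Hou and Zhang, which is similarly lightweight. SciPy and NumPy are assumed.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray, sigma=3.0):
        """Saliency map of a greyscale image via the spectral residual method."""
        f = np.fft.fft2(np.asarray(gray, float))
        log_amp = np.log(np.abs(f) + 1e-9)
        phase = np.angle(f)
        residual = log_amp - uniform_filter(log_amp, size=3)   # spectral residual
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        saliency = gaussian_filter(saliency, sigma)             # smooth the map
        return saliency / saliency.max()
    ```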

  18. Three-dimensional Bragg coherent diffraction imaging of an extended ZnO crystal.

    PubMed

    Huang, Xiaojing; Harder, Ross; Leake, Steven; Clark, Jesse; Robinson, Ian

    2012-08-01

    A complex three-dimensional quantitative image of an extended zinc oxide (ZnO) crystal has been obtained using Bragg coherent diffraction imaging integrated with ptychography. By scanning a 2.5 µm-long arm of a ZnO tetrapod across a 1.3 µm X-ray beam with fine step sizes while measuring a three-dimensional diffraction pattern at each scan spot, the three-dimensional electron density and projected displacement field of the entire crystal were recovered. The simultaneously reconstructed complex wavefront of the illumination combined with its coherence properties determined by a partial coherence analysis implemented in the reconstruction process provide a comprehensive characterization of the incident X-ray beam.

  19. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    PubMed

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.

  20. Audible sonar images generated with proprioception for target analysis.

    PubMed

    Kuc, Roman B

    2017-05-01

    Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
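
    A core ingredient of binaural processing like that described here is the interaural time difference (ITD); the sketch below is a generic cross-correlation estimator, assuming the left and right echo waveforms are available as NumPy arrays sampled at rate fs, and is not the robot's actual signal chain.

    ```python
    import numpy as np

    def interaural_time_difference(left, right, fs):
        """Estimate the ITD (in seconds) between binaural echoes from the
        peak of their cross-correlation; positive values mean the left
        channel lags the right."""
        left = left - np.mean(left)
        right = right - np.mean(right)
        xcorr = np.correlate(left, right, mode="full")
        lag = np.argmax(xcorr) - (len(right) - 1)   # lag in samples at the peak
        return lag / fs
    ```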

  1. A novel underwater dam crack detection and classification approach based on sonar images

    PubMed Central

    Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min

    2017-01-01

    Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments. PMID:28640925

  2. A novel underwater dam crack detection and classification approach based on sonar images.

    PubMed

    Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min

    2017-01-01

    Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments.
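
    As a small illustration of the minimum-spanning-tree step only, the sketch below links crack-fragment centroids with SciPy's MST routine; the clustering, tensor-voting and evidence-theory stages of the paper are not reproduced, and the centroid array is a hypothetical input.

    ```python
    import numpy as np
    from scipy.spatial import distance_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree

    def link_crack_fragments(centroids):
        """Connect crack-fragment centroids (an (N, 2) array) with a minimum
        spanning tree and return the list of linked index pairs."""
        d = distance_matrix(centroids, centroids)      # pairwise Euclidean distances
        mst = minimum_spanning_tree(d).tocoo()         # sparse MST over the complete graph
        return list(zip(mst.row.tolist(), mst.col.tolist()))

    # Example: edges = link_crack_fragments(np.array([[0, 0], [5, 1], [9, 4]]))
    ```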

  3. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    PubMed

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.

  4. Fractal Analysis of Brain Blood Oxygenation Level Dependent (BOLD) Signals from Children with Mild Traumatic Brain Injury (mTBI).

    PubMed

    Dona, Olga; Noseworthy, Michael D; DeMatteo, Carol; Connolly, John F

    2017-01-01

    Conventional imaging techniques are unable to detect abnormalities in the brain following mild traumatic brain injury (mTBI). Yet patients with mTBI typically show delayed response on neuropsychological evaluation. Because fractal geometry represents complexity, we explored its utility in measuring temporal fluctuations of brain resting state blood oxygen level dependent (rs-BOLD) signal. We hypothesized that there could be a detectable difference in rs-BOLD signal complexity between healthy subjects and mTBI patients based on previous studies that associated reduction in signal complexity with disease. Fifteen subjects (13.4 ± 2.3 y/o) and 56 age-matched (13.5 ± 2.34 y/o) healthy controls were scanned using a GE Discovery MR750 3T MRI and 32-channel RF-coil. Axial FSPGR-3D images were used to prescribe rs-BOLD (TE/TR = 35/2000 ms), acquired over 6 minutes. Motion correction was performed and anatomical and functional images were aligned and spatially warped to the N27 standard atlas. Fractal analysis, performed on grey matter, was done by estimating the Hurst exponent using de-trended fluctuation analysis and signal summation conversion methods. Voxel-wise fractal dimension (FD) was calculated for every subject in the control group to generate mean and standard deviation maps for regional Z-score analysis. Voxel-wise validation of FD normality across controls was confirmed, and non-Gaussian voxels (3.05% over the brain) were eliminated from subsequent analysis. For each mTBI patient, regions where Z-score values were at least 2 standard deviations away from the mean (i.e. where |Z| > 2.0) were identified. In individual patients the frequently affected regions were amygdala (p = 0.02), vermis (p = 0.03), caudate head (p = 0.04), hippocampus (p = 0.03), and hypothalamus (p = 0.04), all previously reported as dysfunctional after mTBI, but based on group analysis. It is well known that the brain is best modeled as a complex system. Therefore a measure of complexity using rs-BOLD signal FD could provide an additional method to grade and monitor mTBI. Furthermore, this approach can be personalized, thus providing unique patient-specific assessment.
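
    The sketch below is a minimal detrended fluctuation analysis estimator of the Hurst-like scaling exponent for a single voxel's rs-BOLD time series; it is a generic textbook version, not the study's pipeline, and the signal summation conversion step is omitted. For a stationary fractal signal the fractal dimension is often approximated as FD ≈ 2 − α. The series is assumed to be at least as long as the largest window.

    ```python
    import numpy as np

    def dfa_exponent(signal, scales=(4, 8, 16, 32, 64)):
        """Detrended fluctuation analysis scaling exponent of a 1-D time series."""
        x = np.asarray(signal, float)
        y = np.cumsum(x - x.mean())                      # integrated profile
        flucts = []
        for n in scales:
            n_win = len(y) // n
            segs = y[:n_win * n].reshape(n_win, n)
            t = np.arange(n)
            rms = []
            for seg in segs:                             # detrend each window with a line
                coef = np.polyfit(t, seg, 1)
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            flucts.append(np.mean(rms))
        alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
        return alpha                                     # Hurst-like scaling exponent
    ```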

  5. Identification and evaluation of composition in food powder using point-scan Raman spectral imaging

    USDA-ARS?s Scientific Manuscript database

    This study used Raman spectral imaging coupled with self-modeling mixture analysis (SMA) for identification of three components mixed into a complex food powder mixture. Vanillin, melamine, and sugar were mixed together at 10 different concentration levels (spanning 1% to 10%, w/w) into powdered non...

  6. A new hyperchaotic map and its application for image encryption

    NASA Astrophysics Data System (ADS)

    Natiq, Hayder; Al-Saidi, N. M. G.; Said, M. R. M.; Kilicman, Adem

    2018-01-01

    Based on the one-dimensional Sine map and the two-dimensional Hénon map, a new two-dimensional Sine-Hénon alteration model (2D-SHAM) is hereby proposed. Basic dynamic characteristics of 2D-SHAM are studied through the following aspects: equilibria, Jacobian eigenvalues, trajectory, bifurcation diagram, Lyapunov exponents and a sensitivity dependence test. The complexity of 2D-SHAM is investigated using the Sample Entropy algorithm. Simulation results show that 2D-SHAM is hyperchaotic overall, with high complexity and high sensitivity to its initial values and control parameters. To investigate its performance in terms of security, a new 2D-SHAM-based image encryption algorithm (SHAM-IEA) is also proposed. In this algorithm, the essential requirements of confusion and diffusion are accomplished, and the stochastic 2D-SHAM is used to enhance the security of the encrypted image. The stochastic 2D-SHAM generates random values, hence SHAM-IEA can produce different encrypted images even with the same secret key. Experimental results and security analysis show that SHAM-IEA has a strong capability to withstand statistical analysis, differential attacks, and chosen-plaintext and chosen-ciphertext attacks.
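
    Sample Entropy is the complexity measure named in the abstract; below is a minimal, self-contained sketch of the standard algorithm (template length m, tolerance r), not the authors' implementation, suitable for short orbits generated by a 2-D map. It uses an O(N²) pairwise comparison, which is fine for short series.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample Entropy of a 1-D sequence (higher = more complex/irregular)."""
        x = np.asarray(x, float)
        if r is None:
            r = 0.2 * x.std()                            # common tolerance choice

        def match_count(mm):
            # number of ordered template pairs of length mm within Chebyshev distance r
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return np.sum(d <= r) - len(templates)       # exclude self-matches

        B = match_count(m)
        A = match_count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf
    ```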

  7. [Research applications in digital radiology. Big data and co].

    PubMed

    Müller, H; Hanbury, A

    2016-02-01

    Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that can allow large volumes of data to be accessed is presented. In this way the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and for process automation. The results also represent a secure evaluation environment for medical image analysis. This allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and to use this for decision support or process automation. Real clinical use has not been tested but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists towards the most important parts of diagnostics.

  8. Potential use of combining the diffusion equation with the free Schrödinger equation to improve the Optical Coherence Tomography image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Fernandez, Delia; Salinas, Harry M.; Somfai, Gabor; Puliafito, Carmen A.

    2006-03-01

    Optical coherence tomography (OCT) is a rapidly emerging medical imaging technology. In ophthalmology, OCT is a powerful tool because it enables visualization of the cross-sectional structure of the retina and anterior eye with higher resolutions than any other non-invasive imaging modality. Furthermore, OCT image information can be quantitatively analyzed, enabling objective assessment of features such as macular edema and diabetic retinopathy. We present specific improvements in the quantitative analysis of the OCT system, by combining the diffusion equation with the free Schrödinger equation. In this formulation, important features of the image can be extracted by extending the analysis from the real axis to the complex domain. Experimental results indicate that our proposed novel approach has good performance in speckle noise removal, enhancement and segmentation of the various cellular layers of the retina using the OCT system.

  9. Genomic Diversity and the Microenvironment as Drivers of Progression in DCIS

    DTIC Science & Technology

    2016-10-01

    been acquiring new skills in medical image analysis and learning about the complexities of breast cancer diagnosis. How were the results disseminated...on the Aim 3 results to the SPIE Medical Imaging Conference to be held in February 2017. If accepted, those will each be published in the form of a... image. This will complete Aim 3a. We will continue work on Aim 3b to develop imaging-only predictive models using the proposed machine learning

  10. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.

    PubMed

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-10-01

    In this paper a new methodology for the diagnosing of skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis by using filters such as the classic, inverse and k-law nonlinear. The sample images were obtained by a medical specialist and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.

  11. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis

    PubMed Central

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-01-01

    In this paper a new methodology for the diagnosing of skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis by using filters such as the classic, inverse and k-law nonlinear. The sample images were obtained by a medical specialist and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%. PMID:26504638
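
    The exact definition of the paper's spectral index is not given in the abstract; the sketch below only shows the k-law nonlinear Fourier filtering step on which such an index can be built, together with an illustrative (hypothetical) scalar summary, assuming NumPy.

    ```python
    import numpy as np

    def k_law_spectrum(image, k=0.3):
        """k-law nonlinearly processed Fourier spectrum: |F|**k * exp(i*phase).

        k = 1 gives the classical spectrum, k = 0 a phase-only spectrum.
        """
        F = np.fft.fftshift(np.fft.fft2(np.asarray(image, float)))
        return (np.abs(F) ** k) * np.exp(1j * np.angle(F))

    def spectral_index(image, k=0.3):
        """A simple scalar summary of the processed spectrum (illustrative only,
        not the index defined in the paper)."""
        S = np.abs(k_law_spectrum(image, k))
        return S.sum() / S.max()
    ```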

  12. Directionality analysis on functional magnetic resonance imaging during motor task using Granger causality.

    PubMed

    Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Deuschl, G; Raethjen, J; Heute, U; Muthuraman, M

    2012-01-01

    Directionality analysis of signals originating from different parts of the brain during motor tasks has gained a lot of interest. Since brain activity can be recorded over time, methods of time series analysis can be applied to medical time series as well. Granger Causality is a method to find a causal relationship between time series. Such causality can be referred to as a directional connection and is not necessarily bidirectional. The aim of this study is to differentiate between different motor tasks on the basis of activation maps and also to understand the nature of connections present between different parts of the brain. In this paper, three different motor tasks (finger tapping, simple finger sequencing, and complex finger sequencing) are analyzed. Time series for each task were extracted from functional magnetic resonance imaging (fMRI) data, which have a very good spatial resolution and can look into the sub-cortical regions of the brain. Activation maps based on fMRI images show that, in the case of complex finger sequencing, most parts of the brain are active, unlike finger tapping, during which only limited regions show activity. Directionality analysis on time series extracted from the contralateral motor cortex (CMC), supplementary motor area (SMA), and cerebellum (CER) shows bidirectional connections between these parts of the brain. In the case of simple and complex finger sequencing, the strongest connections originate from SMA and CMC, while connections originating from CER in either direction are the weakest ones in magnitude during all paradigms.
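
    For the pairwise directionality test itself, a minimal sketch using statsmodels is shown below; the ROI time-series extraction from fMRI and any multivariate extensions used in the study are not covered, and the series names in the usage comment are placeholders. Older statsmodels versions may print the full test output.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    def granger_direction(source, target, maxlag=3):
        """Smallest p-value over lags 1..maxlag for 'source Granger-causes target'."""
        data = np.column_stack([target, source])          # column 2 tested as cause of column 1
        res = grangercausalitytests(data, maxlag=maxlag)
        return min(res[lag][0]["ssr_ftest"][1] for lag in range(1, maxlag + 1))

    # Example with two hypothetical ROI time series (e.g. SMA -> CMC):
    # p = granger_direction(sma_series, cmc_series)
    ```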

  13. A Graphical User Interface for Software-assisted Tracking of Protein Concentration in Dynamic Cellular Protrusions.

    PubMed

    Saha, Tanumoy; Rathmann, Isabel; Galic, Milos

    2017-07-11

    Filopodia are dynamic, finger-like cellular protrusions associated with migration and cell-cell communication. In order to better understand the complex signaling mechanisms underlying filopodial initiation, elongation and subsequent stabilization or retraction, it is crucial to determine the spatio-temporal protein activity in these dynamic structures. To analyze protein function in filopodia, we recently developed a semi-automated tracking algorithm that adapts to filopodial shape-changes, thus allowing parallel analysis of protrusion dynamics and relative protein concentration along the whole filopodial length. Here, we present a detailed step-by-step protocol for optimized cell handling, image acquisition and software analysis. We further provide instructions for the use of optional features during image analysis and data representation, as well as troubleshooting guidelines for all critical steps along the way. Finally, we also include a comparison of the described image analysis software with other programs available for filopodia quantification. Together, the presented protocol provides a framework for accurate analysis of protein dynamics in filopodial protrusions using image analysis software.

  14. 3D Reconstruction of Frozen Plant Tissue: a unique histological analysis to image post-freeze responses

    USDA-ARS?s Scientific Manuscript database

    Winter hardiness in plants is the result of a complex interaction between genes, the tissue where those genes are expressed and the environment. The light microscope is a valuable tool to understand this complexity which will ultimately help researchers improve the tolerance of plants to freezing st...

  15. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  16. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus will provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase-modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm laterally by 3 mm in depth. The gap between the tooth and gum area was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  17. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  18. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  19. Model-based image analysis of a tethered Brownian fibre for shear stress sensing

    PubMed Central

    2017-01-01

    The measurement of fluid dynamic shear stress acting on a biologically relevant surface is a challenging problem, particularly in the complex environment of, for example, the vasculature. While an experimental method for the direct detection of wall shear stress via the imaging of a synthetic biology nanorod has recently been developed, the data interpretation so far has been limited to phenomenological random walk modelling, small-angle approximation, and image analysis techniques which do not take into account the production of an image from a three-dimensional subject. In this report, we develop a mathematical and statistical framework to estimate shear stress from rapid imaging sequences based firstly on stochastic modelling of the dynamics of a tethered Brownian fibre in shear flow, and secondly on a novel model-based image analysis, which reconstructs fibre positions by solving the inverse problem of image formation. This framework is tested on experimental data, providing the first mechanistically rational analysis of the novel assay. What follows further develops the established theory for an untethered particle in a semi-dilute suspension, which is of relevance to, for example, the study of Brownian nanowires without flow, and presents new ideas in the field of multi-disciplinary image analysis. PMID:29212755

  20. Computerized analysis of sonograms for the detection of breast lesions

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Giger, Maryellen L.; Horsch, Karla; Vyborny, Carl J.

    2002-05-01

    With a renewed interest in using non-ionizing radiation for the screening of high risk women, there is a clear role for a computerized detection aid in ultrasound. Thus, we are developing a computerized detection method for the localization of lesions on breast ultrasound images. The computerized detection scheme utilizes two methods. Firstly, a radial gradient index analysis is used to distinguish potential lesions from normal parenchyma. Secondly, an image skewness analysis is performed to identify posterior acoustic shadowing. We analyzed 400 cases (757 images) consisting of complex cysts, solid benign lesions, and malignant lesions. The detection method yielded an overall sensitivity of 95% by image, and 99% by case at a false-positive rate of 0.94 per image. In 51% of all images, only the lesion itself was detected, while in 5% of the images only the shadowing was identified. For malignant lesions these numbers were 37% and 9%, respectively. In summary, we have developed a computer detection method for lesions on ultrasound images of the breast, which may ultimately aid in breast cancer screening.
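
    The paper's exact skewness feature is not specified in the abstract; the sketch below shows one plausible way to score posterior acoustic shadowing by computing the grey-level skewness of the region below a candidate lesion, assuming SciPy. The function name and inputs are placeholders.

    ```python
    import numpy as np
    from scipy.stats import skew

    def shadow_score(us_image, lesion_bottom_row):
        """Skewness of grey levels in the region posterior (below) a candidate lesion.

        Strong posterior acoustic shadowing skews the grey-level distribution of
        the posterior strip, so this value can serve as a shadow feature.
        """
        posterior = np.asarray(us_image, float)[lesion_bottom_row:, :]
        return skew(posterior.ravel())
    ```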

  1. Sensor image prediction techniques

    NASA Astrophysics Data System (ADS)

    Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.

    1981-02-01

    The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of these mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.

  2. Forest cover type analysis of New England forests using innovative WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Kovacs, Jenna M.

    For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and as a result have been, and continue to be, a challenge when classifying forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at the nadir. These additional bands have the potential to improve classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were generated using traditional and area-based error matrices, and additional standard accuracy measures (i.e., KAPPA) were generated. The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.

  3. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    NASA Astrophysics Data System (ADS)

    Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.

    2016-10-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other types of cells.

  4. A new background subtraction method for Western blot densitometry band quantification through image analysis software.

    PubMed

    Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier

    2018-06-01

    Since its first description, Western blot has been widely used in molecular labs. It constitutes a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. The Western blot quantification method constitutes a critical step in obtaining accurate and reproducible results. Due to the technical knowledge required for densitometry analysis, together with limited resource availability, standard office scanners are often used for the image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approximation that could be used when resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
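
    The new background subtraction method itself is not described in the abstract, so the sketch below shows a generic local-background approach (a large median filter subtracted before integrating the band intensity), not the authors' method; the function name and inputs are placeholders, and SciPy is assumed.

    ```python
    import numpy as np
    from scipy import ndimage

    def band_density(blot, band_slice, bg_size=51):
        """Integrated intensity of one band after local background subtraction.

        blot       : 2-D array of the scanned film (here, darker bands = higher values)
        band_slice : (row_slice, col_slice) rectangle around the band
        bg_size    : size of the median filter used as a smooth background estimate
        """
        img = np.asarray(blot, float)
        background = ndimage.median_filter(img, size=bg_size)   # smooth local background
        corrected = np.clip(img - background, 0, None)
        return corrected[band_slice].sum()
    ```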

  5. Microvascular Autonomic Composites

    DTIC Science & Technology

    2012-01-06

    thermogravimetric analysis (TGA) was employed. The double wall allowed for increased thermal stability of the microcapsules, which was...fluorescent nanoparticles (Berfield et al. 2006). Digital Image Correlation (DIC) is a data analysis method, which applies a mathematical...Theme IV: Experimental Assessment & Analysis; 2.4.1 Optical diagnostics for complex microfluidic systems; 2.4.2 Fluorescent thermometry

  6. Imaging Fluorescent Combustion Species in Gas Turbine Flame Tubes: On Complexities in Real Systems

    NASA Technical Reports Server (NTRS)

    Hicks, Y. R.; Locke, R. J.; Anderson, R. C.; Zaller, M.; Schock, H. J.

    1997-01-01

    Planar laser-induced fluorescence (PLIF) is used to visualize the flame structure via OH, NO, and fuel imaging in kerosene- burning gas turbine combustor flame tubes. When compared to simple gaseous hydrocarbon flames and hydrogen flames, flame tube testing complexities include spectral interferences from large fuel fragments, unknown turbulence interactions, high pressure operation, and the concomitant need for windows and remote operation. Complications of these and other factors as they apply to image analysis are considered. Because both OH and gas turbine engine fuels (commercial and military) can be excited and detected using OH transition lines, a narrowband and a broadband detection scheme are compared and the benefits and drawbacks of each method are examined.

  7. MIiSR: Molecular Interactions in Super-Resolution Imaging Enables the Analysis of Protein Interactions, Dynamics and Formation of Multi-protein Structures.

    PubMed

    Caetano, Fabiana A; Dirk, Brennan S; Tam, Joshua H K; Cavanagh, P Craig; Goiko, Maria; Ferguson, Stephen S G; Pasternak, Stephen H; Dikeakos, Jimmy D; de Bruyn, John R; Heit, Bryan

    2015-12-01

    Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.

  8. DfM requirements and ROI analysis for system-on-chip

    NASA Astrophysics Data System (ADS)

    Balasinski, Artur

    2005-11-01

    DfM (Design-for-Manufacturability) has become a staple requirement beyond the 100 nm technology node for efficient generation of mask data, cost reduction, and optimal circuit performance. The layout pattern has to comply with many requirements pertaining to database structure and complexity, suitability for image enhancement by optical proximity correction, and mask data pattern density and distribution over the image field. These requirements are of particular complexity for Systems-on-Chip (SoC). A number of macro-, meso-, and microscopic effects such as reticle macroloading, planarization dishing, and pattern bridging or breaking would compromise fab yield, device performance, or both. In order to determine the optimal set of DfM rules applicable to particular designs, Return-on-Investment (ROI) and Failure Mode and Effect Analysis (FMEA) are proposed.

  9. Measuring the complexity of design in real-time imaging software

    NASA Astrophysics Data System (ADS)

    Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.

    2007-02-01

    Due to the intricacies in the algorithms involved, the design of imaging software is considered to be more complex than non-image processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity between imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. Therefore it may be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far-reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.

  10. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the spectral similarity of pixels. Then, through clustering post-processing and the merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, the number of endmembers is determined from the landscape complexity of the image blocks and from scatter-diagram analysis, and the hourglass algorithm is then used to extract the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method overcomes the limit on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to perform multispectral image endmember extraction.
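
    As an illustration of the first two steps only, and not the paper's exact pipeline, the sketch below reduces the spectral dimensionality with PCA and clusters pixels with k-means (standing in for ISODATA); the mean spectrum of each cluster serves as an endmember candidate, in place of the block-wise hourglass step. scikit-learn is assumed.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def endmember_candidates(cube, n_components=3, n_clusters=8):
        """cube: (rows, cols, bands) multispectral image.

        Returns candidate endmember spectra (one per cluster) and a label map.
        """
        r, c, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        scores = PCA(n_components=n_components).fit_transform(X)   # spectral de-correlation
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(scores)
        spectra = np.array([X[labels == k].mean(axis=0) for k in range(n_clusters)])
        return spectra, labels.reshape(r, c)
    ```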

  11. Synthetic Biomaterials to Rival Nature's Complexity-a Path Forward with Combinatorics, High-Throughput Discovery, and High-Content Analysis.

    PubMed

    Zhang, Douglas; Lee, Junmin; Kilian, Kristopher A

    2017-10-01

    Cells in tissue receive a host of soluble and insoluble signals in a context-dependent fashion, where integration of these cues through a complex network of signal transduction cascades will define a particular outcome. Biomaterials scientists and engineers are tasked with designing materials that can at least partially recreate this complex signaling milieu towards new materials for biomedical applications. In this progress report, recent advances in high throughput techniques and high content imaging approaches that are facilitating the discovery of efficacious biomaterials are described. From microarrays of synthetic polymers, peptides and full-length proteins, to designer cell culture systems that present multiple biophysical and biochemical cues in tandem, it is discussed how the integration of combinatorics with high content imaging and analysis is essential to extracting biologically meaningful information from large scale cellular screens to inform the design of next generation biomaterials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.

  13. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy.

    PubMed

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa

    2016-08-01

    For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28%  ±  1.46%) and margin error (0.49  ±  0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.

  14. Method for enhancing cell penetration of Gd3+-based MRI contrast agents by conjugation with hydrophobic fluorescent dyes.

    PubMed

    Yamane, Takehiro; Hanaoka, Kenjiro; Muramatsu, Yasuaki; Tamura, Keita; Adachi, Yusuke; Miyashita, Yasushi; Hirata, Yasunobu; Nagano, Tetsuo

    2011-11-16

    Gadolinium ion (Gd(3+)) complexes are commonly used as magnetic resonance imaging (MRI) contrast agents to enhance signals in T(1)-weighted MR images. Recently, several methods to achieve cell-permeation of Gd(3+) complexes have been reported, but more general and efficient methodology is needed. In this report, we describe a novel method to achieve cell permeation of Gd(3+) complexes by using hydrophobic fluorescent dyes as a cell-permeability-enhancing unit. We synthesized Gd(3+) complexes conjugated with boron dipyrromethene (BDP-Gd) and Cy7 dye (Cy7-Gd), and showed that these conjugates can be introduced efficiently into cells. To examine the relationship between cell permeability and dye structure, we further synthesized a series of Cy7-Gd derivatives. On the basis of MR imaging, flow cytometry, and ICP-MS analysis of cells loaded with Cy7-Gd derivatives, highly hydrophobic and nonanionic dyes were effective for enhancing cell permeation of Gd(3+) complexes. Furthermore, the behavior of these Cy7-Gd derivatives was examined in mice. Thus, conjugation of hydrophobic fluorescent dyes appears to be an effective approach to improve the cell permeability of Gd(3+) complexes, and should be applicable for further development of Gd(3+)-based MRI contrast agents.

  15. Diffusion tensor imaging in children with tuberous sclerosis complex: tract-based spatial statistics assessment of brain microstructural changes.

    PubMed

    Zikou, Anastasia K; Xydis, Vasileios G; Astrakas, Loukas G; Nakou, Iliada; Tzarouchi, Loukia C; Tzoufi, Meropi; Argyropoulou, Maria I

    2016-07-01

    There is evidence of microstructural changes in normal-appearing white matter of patients with tuberous sclerosis complex. To evaluate major white matter tracts in children with tuberous sclerosis complex using tract-based spatial statistics diffusion tensor imaging (DTI) analysis. Eight children (mean age ± standard deviation: 8.5 ± 5.5 years) with an established diagnosis of tuberous sclerosis complex and 8 age-matched controls were studied. The imaging protocol consisted of T1-weighted high-resolution 3-D spoiled gradient-echo sequence and a spin-echo, echo-planar diffusion-weighted sequence. Differences in the diffusion indices were evaluated using tract-based spatial statistics. Tract-based spatial statistics showed increased axial diffusivity in the children with tuberous sclerosis complex in the superior and anterior corona radiata, the superior longitudinal fascicle, the inferior fronto-occipital fascicle, the uncinate fascicle and the anterior thalamic radiation. No significant differences were observed in fractional anisotropy, mean diffusivity and radial diffusivity between patients and control subjects. No difference was found in the diffusion indices between the baseline and follow-up examination in the patient group. Patients with tuberous sclerosis complex have increased axial diffusivity in major white matter tracts, probably related to reduced axonal integrity.

  16. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since this allows one to operate on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  17. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods of removing the speckle usually operate on the real SAR image and reduce the edges of the images while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing the speckle while simultaneously enhancing the targets and edges is still a puzzle. To suppress the speckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method for complex SAR images is presented, which is based on prior knowledge of the targets and the edges. Owing to the non-quadratic, non-convex nature and the complexity of the cost function, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and the experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.

  18. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible minicomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.

  19. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

    In this paper, we report on our examination of the validity of OCT in identifying changes using a skin cancer texture analysis compiled from Haralick texture features, fractal dimension, the Markov random field method, and the complex directional features from different tissues. The described features have been used to detect specific spatial characteristics which can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate the fractal dimension of skin probes. Markov random field features have been used to enhance the quality of the classification. Additionally, we used the complex directional field calculated by the local gradient methodology to increase the assessment quality of the diagnosis method. Our results demonstrate that these texture features may provide helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and of tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at the central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
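
    As a hedged illustration of the box-counting estimate of fractal dimension mentioned above (a generic numpy sketch, not the authors' implementation), the following code counts occupied boxes of a binary image at dyadic scales and fits the log-log slope; it assumes the image contains at least one foreground pixel:

      import numpy as np

      def box_counting_dimension(binary_image):
          """Estimate the fractal dimension of a binary image by box counting."""
          img = np.asarray(binary_image, dtype=bool)
          sizes = [2 ** k for k in range(1, int(np.log2(min(img.shape))))]
          counts = []
          for s in sizes:
              # Count boxes of side s containing at least one foreground pixel.
              h = img.shape[0] // s * s
              w = img.shape[1] // s * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope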

  20. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    NASA Astrophysics Data System (ADS)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor image segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
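
    The model-fitting step can be illustrated with a minimal particle swarm optimization loop (a generic numpy sketch, not the authors' implementation); cost is any user-supplied fitting error over a small parameter vector, e.g. shape-model coefficients:

      import numpy as np

      def pso_minimize(cost, dim, n_particles=30, iters=100,
                       w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
          """Minimize cost(x) over a box in R^dim with a basic particle swarm."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
          v = np.zeros_like(x)                               # velocities
          pbest = x.copy()
          pbest_cost = np.array([cost(p) for p in x])
          gbest = pbest[np.argmin(pbest_cost)].copy()
          for _ in range(iters):
              r1 = rng.random((n_particles, dim))
              r2 = rng.random((n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              costs = np.array([cost(p) for p in x])
              improved = costs < pbest_cost
              pbest[improved] = x[improved]
              pbest_cost[improved] = costs[improved]
              gbest = pbest[np.argmin(pbest_cost)].copy()
          return gbest, pbest_cost.min()

      # Toy usage: recover a hypothetical coefficient vector from a quadratic misfit.
      target = np.array([0.3, -0.2, 0.5])
      best, best_cost = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)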

  1. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of protein and genome expression. The fusion method of GFP and phase contrast images based on complex shearlet transform (CST) is proposed in this paper. Firstly the GFP image is converted to IHS model and its intensity component is obtained. Secondly the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency subbands and the high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
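
    For the high-frequency subbands, the absolute-maximum rule keeps, at each coefficient, whichever input has the larger magnitude. A minimal numpy sketch of this rule (generic, not tied to any particular shearlet implementation):

      import numpy as np

      def fuse_abs_max(subband_a, subband_b):
          """Absolute-maximum fusion rule for two high-frequency subbands."""
          return np.where(np.abs(subband_a) >= np.abs(subband_b), subband_a, subband_b)

      # Toy usage with two random "subbands" of identical shape.
      a = np.random.randn(64, 64)
      b = np.random.randn(64, 64)
      fused = fuse_abs_max(a, b)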

  2. Fast linear feature detection using multiple directional non-maximum suppression.

    PubMed

    Sun, C; Vallotton, P

    2009-05-01

    The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.

  3. Complex dark-field contrast and its retrieval in x-ray phase contrast imaging implemented with Talbot interferometry.

    PubMed

    Yang, Yi; Tang, Xiangyang

    2014-10-01

    Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering of the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures distribute in space randomly, the authors' theoretical derivation starts by assuming that an object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of analyzer grating G₂ and a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. Then the concept of complex dark-field contrast is introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through computer simulation study with a specially designed numerical phantom, they evaluate and validate the derived analytic formulae and the proposed retrieval method. Both theoretical analysis and computer simulation study show that the effect of an object's subpixel microstructures on x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast. The imaginary part of complex dark-field contrast quantifies the influence of the object's subpixel microstructures on the phase of intensity oscillations. Furthermore, at relatively high energies, for soft tissues it can be retrieved for imaging with a method based on the phase-attenuation duality. The analytic formulae derived in this work to characterize the complex dark-field contrast in x-ray phase contrast imaging method implemented with Talbot interferometry are of significance, which may initiate more activities in the research and development of x-ray differential phase contrast imaging for extensive biomedical applications.
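
    In grating-based imaging of this kind, the zeroth- and first-order Fourier coefficients of the per-pixel phase-stepping curve give the mean intensity, the fringe visibility, and the fringe phase. The following numpy sketch for a single pixel sampled over one grating period is illustrative only (it is not the authors' derivation):

      import numpy as np

      def stepping_curve_coefficients(intensities):
          """Mean, visibility and phase of a phase-stepping curve over one period."""
          intensities = np.asarray(intensities, dtype=float)
          n = len(intensities)
          coeffs = np.fft.fft(intensities)
          mean_intensity = np.abs(coeffs[0]) / n
          first_harmonic = 2.0 * np.abs(coeffs[1]) / n
          visibility = first_harmonic / mean_intensity
          phase = np.angle(coeffs[1])
          return mean_intensity, visibility, phase

      # Toy usage: 8 phase steps of a sinusoidal stepping curve.
      steps = np.arange(8)
      curve = 100.0 * (1.0 + 0.3 * np.cos(2 * np.pi * steps / 8 + 0.5))
      print(stepping_curve_coefficients(curve))  # ~ (100.0, 0.3, 0.5)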

  4. Research of processes of reception and analysis of dynamic digital medical images in hardware/software complexes used for diagnostics and treatment of cardiovascular diseases

    NASA Astrophysics Data System (ADS)

    Karmazikov, Y. V.; Fainberg, E. M.

    2005-06-01

    Work with DICOM-compatible equipment integrated into hardware and software systems for medical purposes is considered. The structure of the data reception and transformation process is illustrated using the example of the digital roentgenography and angiography systems included in the DIMOL-IK hardware-software complex. Algorithms for the reception and analysis of the data are proposed, and questions of further processing and storage of the received data are considered.
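
    By way of illustration only (this is not part of the DIMOL-IK complex), a present-day Python workflow might read an image and its metadata from DICOM-compatible equipment with the pydicom package; the file name below is hypothetical:

      import pydicom

      # Hypothetical frame exported by DICOM-compatible angiography equipment.
      ds = pydicom.dcmread("angio_frame_0001.dcm")
      # Standard attributes such as these are available when present in the file.
      print(ds.Modality, ds.Rows, ds.Columns)
      pixels = ds.pixel_array  # image data as a numpy array (requires numpy)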

  5. Relevant Scatterers Characterization in SAR Images

    NASA Astrophysics Data System (ADS)

    Chaabouni, Houda; Datcu, Mihai

    2006-11-01

    Recognizing scenes in single-look, meter-resolution Synthetic Aperture Radar (SAR) images requires the capability to identify relevant signal signatures under conditions of variable image acquisition geometry and arbitrary object poses and configurations. Among the methods to detect relevant scatterers in SAR images, we can mention internal coherence. The SAR spectrum, split in azimuth, generates a series of images which preserve high coherence only for particular object scattering. The detection of relevant scatterers can be done by correlation study or Independent Component Analysis (ICA) methods. The present article reviews the state of the art for SAR internal correlation analysis and proposes further extensions using elements of inference based on information theory applied to complex-valued signals. The set of azimuth-look images is analyzed using mutual information measures and an equivalent channel capacity is derived. The localization of the "target" requires analysis in a small image window, thus resulting in imprecise estimation of the second-order statistics of the signal. For better precision, a Hausdorff measure is introduced. The method is applied to detect and characterize relevant objects in urban areas.
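
    A hedged numpy sketch of a histogram-based mutual information estimate between two azimuth-look amplitude images, illustrating the kind of information-theoretic measure mentioned above (not the authors' exact estimator):

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          """Histogram-based mutual information (in nats) between two images."""
          hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = hist_2d / hist_2d.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

      # Toy usage on two correlated random "looks".
      look1 = np.random.rand(128, 128)
      look2 = look1 + 0.1 * np.random.rand(128, 128)
      print(mutual_information(look1, look2))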

  6. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
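
    The hue-entropy statistic mentioned above can be sketched as follows, assuming an RGB image with values in [0, 1] and using scikit-image for the color conversion (an illustration, not the authors' code):

      import numpy as np
      from skimage.color import rgb2hsv

      def hue_entropy(rgb_image, bins=36):
          """Shannon entropy (in bits) of the hue histogram of an RGB image."""
          hue = rgb2hsv(rgb_image)[..., 0].ravel()
          hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      # Toy usage: a random image has many hues, hence high hue entropy.
      print(hue_entropy(np.random.rand(64, 64, 3)))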

  7. Automated image analysis of placental villi and syncytial knots in histological sections.

    PubMed

    Kidron, Debora; Vainer, Ifat; Fisher, Yael; Sharony, Reuven

    2017-05-01

    Delayed villous maturation and accelerated villous maturation diagnosed in histologic sections are morphologic manifestations of pathophysiological conditions. The inter-observer agreement among pathologists in assessing these conditions is moderate at best. We investigated whether automated image analysis of placental villi and syncytial knots could improve standardization in diagnosing these conditions. Placentas of antepartum fetal death at or near term were diagnosed as normal, delayed or accelerated villous maturation. Histologic sections of 5 cases per group were photographed at ×10 magnification. Automated image analysis of villi and syncytial knots was performed, using ImageJ public domain software. Analysis of hundreds of histologic images was carried out within minutes on a personal computer, using macro commands. Compared to normal placentas, villi from delayed maturation were larger and fewer, with fewer and smaller syncytial knots. Villi from accelerated maturation were smaller. The data were further analyzed according to horizontal placental zones and groups of villous size. Normal placentas can be discriminated from placentas of delayed or accelerated villous maturation using automated image analysis. Automated image analysis of villi and syncytial knots is not equivalent to interpretation by the human eye. Each method has advantages and disadvantages in assessing the 2-dimensional histologic sections representing the complex, 3-dimensional villous tree. Image analysis of placentas provides quantitative data that might help in standardizing and grading of placentas for diagnostic and research purposes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Analysis of Hyoid-Larynx Complex Using 3D Geometric Morphometrics.

    PubMed

    Loth, Anthony; Corny, Julien; Santini, Laure; Dahan, Laurie; Dessi, Patrick; Adalian, Pascal; Fakhry, Nicolas

    2015-06-01

    The aim of this study was to obtain a quantitative anatomical description of the hyoid bone-larynx complex using modern 3D reconstruction tools. The study was conducted on 104 bones from CT scan images of living adult subjects. Three-dimensional reconstructions were created from CT scan images using the AVIZO 6.2 software package. A study of this complex was carried out using metric and morphological analyses. Characteristics of the hyoid bone and larynx were highly heterogeneous and were closely linked with the sex, height, and weight of the individuals. Height and width of the larynx were significantly greater in men than in women (24.99 vs. 17.3 mm, p ≤ 0.05 and 46.75 vs. 41.07, p ≤ 0.05), whereas the thyroid angle was larger in females (81.12° vs. 74.48°, p ≤ 0.05). There was a significant correlation between the height and weight of subjects and different measurements of the hyoid-larynx complex (Pearson's correlation coefficient r = 0.42, p ≤ 0.05 between the height of the thyroid ala and the height of subjects, and r = 0.1, p ≤ 0.05 between the height of the thyroid ala and the weight of subjects). Shape and size analysis of the hyoid-larynx complex showed the existence of a significant sexual dimorphism and high interindividual heterogeneity depending on patient morphology. These results encourage us to go further with functional and imaging correlations.

  9. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  10. Quantification of mitral valve morphology with three-dimensional echocardiography--can measurement lead to better management?

    PubMed

    Lee, Alex Pui-Wai; Fang, Fang; Jin, Chun-Na; Kam, Kevin Ka-Ho; Tsui, Gary K W; Wong, Kenneth K Y; Looi, Jen-Li; Wong, Randolph H L; Wan, Song; Sun, Jing Ping; Underwood, Malcolm J; Yu, Cheuk-Man

    2014-01-01

    The mitral valve (MV) has complex 3-dimensional (3D) morphology and motion. Advance in real-time 3D echocardiography (RT3DE) has revolutionized clinical imaging of the MV by providing clinicians with realistic visualization of the valve. Thus far, RT3DE of the MV structure and dynamics has adopted an approach that depends largely on subjective and qualitative interpretation of the 3D images of the valve, rather than objective and reproducible measurement. RT3DE combined with image-processing computer techniques provides precise segmentation and reliable quantification of the complex 3D morphology and rapid motion of the MV. This new approach to imaging may provide additional quantitative descriptions that are useful in diagnostic and therapeutic decision-making. Quantitative analysis of the MV using RT3DE has increased our understanding of the pathologic mechanism of degenerative, ischemic, functional, and rheumatic MV disease. Most recently, 3D morphologic quantification has entered into clinical use to provide more accurate diagnosis of MV disease and for planning surgery and transcatheter interventions. Current limitations of this quantitative approach to MV imaging include labor-intensiveness during image segmentation and lack of a clear definition of the clinical significance of many of the morphologic parameters. This review summarizes the current development and applications of quantitative analysis of the MV morphology using RT3DE.

  11. (Machine-)Learning to analyze in vivo microscopy: Support vector machines.

    PubMed

    Wang, Michael F Z; Fernandez-Gonzalez, Rodrigo

    2017-11-01

    The development of new microscopy techniques for super-resolved, long-term monitoring of cellular and subcellular dynamics in living organisms is revealing new fundamental aspects of tissue development and repair. However, new microscopy approaches present several challenges. In addition to unprecedented requirements for data storage, the analysis of high resolution, time-lapse images is too complex to be done manually. Machine learning techniques are ideally suited for the (semi-)automated analysis of multidimensional image data. In particular, support vector machines (SVMs), have emerged as an efficient method to analyze microscopy images obtained from animals. Here, we discuss the use of SVMs to analyze in vivo microscopy data. We introduce the mathematical framework behind SVMs, and we describe the metrics used by SVMs and other machine learning approaches to classify image data. We discuss the influence of different SVM parameters in the context of an algorithm for cell segmentation and tracking. Finally, we describe how the application of SVMs has been critical to study protein localization in yeast screens, for lineage tracing in C. elegans, or to determine the developmental stage of Drosophila embryos to investigate gene expression dynamics. We propose that SVMs will become central tools in the analysis of the complex image data that novel microscopy modalities have made possible. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
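
    As a generic, hedged example of SVM-based classification of image-derived features (using scikit-learn, not the authors' pipeline), one might train a classifier on per-pixel feature vectors labelled as cell versus background:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Hypothetical training data: one row of image-derived features per pixel
      # (e.g. intensity, gradient, texture); labels 1 = cell, 0 = background.
      rng = np.random.default_rng(0)
      features = rng.normal(size=(500, 4))
      labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(features, labels)
      print("training accuracy:", clf.score(features, labels))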

  12. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    PubMed

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advance analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  13. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    NASA Astrophysics Data System (ADS)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological change can be observed under H and E stained tendon microscopy. However, qualitative analysis is too subjective and thus the results depend heavily on the observers. We develop an automatic segmentation procedure which segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. This procedure first determines the complexity of images and then segments the nuclei from the image. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nucleus count of the proposed method is close to the experts' count, and the processing time of the proposed method is much shorter than the experts'.
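
    A minimal scikit-image sketch of threshold-based nucleus segmentation and counting; Otsu's threshold stands in here for the sampling- and Laplacian-based thresholds described by the authors, so this is an illustration of the idea rather than their method:

      import numpy as np
      from skimage import filters, measure, morphology

      def count_nuclei(gray_image, min_area=20):
          """Segment dark nuclei in a grayscale image and return their count."""
          threshold = filters.threshold_otsu(gray_image)
          nuclei = gray_image < threshold          # nuclei stain darker than background
          nuclei = morphology.remove_small_objects(nuclei, min_size=min_area)
          labels = measure.label(nuclei)
          return labels.max(), labels

      # Toy usage on a synthetic image with two dark blobs.
      img = np.ones((100, 100))
      img[20:30, 20:30] = 0.2
      img[60:75, 50:65] = 0.1
      count, label_image = count_nuclei(img)
      print(count)  # -> 2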

  14. Healthcare provider and patient perspectives on diagnostic imaging investigations.

    PubMed

    Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A

    2015-05-20

    Much has been written about the patient-centred approach in doctor-patient consultations. Little is known about interactions and communication processes regarding healthcare providers' and patients' perspectives on expectations and experiences of diagnostic imaging investigations within the medical encounter. Patients journey through the health system from the point of referral to the imaging investigation itself and then to the post-imaging consultation. AIM AND SETTING: To explore healthcare provider and patient perspectives on interaction and communication processes during diagnostic imaging investigations as part of their clinical journey through a healthcare complex. A qualitative study was conducted, with two phases of data collection. Twenty-four patients were conveniently selected at a public district hospital complex and were followed throughout their journey in the hospital system, from admission to discharge. The second phase entailed focus group interviews conducted with providers in the district hospital and adjacent academic hospital (medical officers and family physicians, nurses, radiographers, radiology consultants and registrars). Two main themes guided our analysis: (1) provider perspectives; and (2) patient dispositions and reactions. Golden threads that cut across these themes are interactions and communication processes in the context of expectations, experiences of the imaging investigations and the outcomes thereof. Insights from this study provide a better understanding of the complexity of the processes and interactions between providers and patients during the imaging investigations conducted as part of their clinical pathway. The interactions and communication processes are provider-patient centred when a referral for a diagnostic imaging investigation is included.

  15. Light Microscopy at Maximal Precision

    NASA Astrophysics Data System (ADS)

    Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.

    2017-10-01

    Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.

  16. Student Learning about Biomolecular Self-Assembly Using Two Different External Representations

    PubMed Central

    Höst, Gunnar E.; Larsson, Caroline; Olson, Arthur; Tibell, Lena A. E.

    2013-01-01

    Self-assembly is the fundamental but counterintuitive principle that explains how ordered biomolecular complexes form spontaneously in the cell. This study investigated the impact of using two external representations of virus self-assembly, an interactive tangible three-dimensional model and a static two-dimensional image, on student learning about the process of self-assembly in a group exercise. A conceptual analysis of self-assembly into a set of facets was performed to support study design and analysis. Written responses were collected in a pretest/posttest experimental design with 32 Swedish university students. A quantitative analysis of close-ended items indicated that the students improved their scores between pretest and posttest, with no significant difference between the conditions (tangible model/image). A qualitative analysis of an open-ended item indicated students were unfamiliar with self-assembly prior to the study. Students in the tangible model condition used the facets of self-assembly in their open-ended posttest responses more frequently than students in the image condition. In particular, it appears that the dynamic properties of the tangible model may support student understanding of self-assembly in terms of the random and reversible nature of molecular interactions. A tentative difference was observed in response complexity, with more multifaceted responses in the tangible model condition. PMID:24006395

  17. Phaedra, a protocol-driven system for analysis and validation of high-content imaging and flow cytometry.

    PubMed

    Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel

    2012-04-01

    High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.

  18. Student learning about biomolecular self-assembly using two different external representations.

    PubMed

    Höst, Gunnar E; Larsson, Caroline; Olson, Arthur; Tibell, Lena A E

    2013-01-01

    Self-assembly is the fundamental but counterintuitive principle that explains how ordered biomolecular complexes form spontaneously in the cell. This study investigated the impact of using two external representations of virus self-assembly, an interactive tangible three-dimensional model and a static two-dimensional image, on student learning about the process of self-assembly in a group exercise. A conceptual analysis of self-assembly into a set of facets was performed to support study design and analysis. Written responses were collected in a pretest/posttest experimental design with 32 Swedish university students. A quantitative analysis of close-ended items indicated that the students improved their scores between pretest and posttest, with no significant difference between the conditions (tangible model/image). A qualitative analysis of an open-ended item indicated students were unfamiliar with self-assembly prior to the study. Students in the tangible model condition used the facets of self-assembly in their open-ended posttest responses more frequently than students in the image condition. In particular, it appears that the dynamic properties of the tangible model may support student understanding of self-assembly in terms of the random and reversible nature of molecular interactions. A tentative difference was observed in response complexity, with more multifaceted responses in the tangible model condition.

  19. Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets

    PubMed Central

    Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
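
    A hedged illustration of fitting a bi-exponential fluorescence decay for a single pixel with SciPy; this is plain pixel-wise least squares, not the variable-projection global analysis implemented in FLIMfit:

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp_decay(t, a1, tau1, a2, tau2):
          """Bi-exponential fluorescence decay model (times in ns)."""
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      # Synthetic decay with Poisson-like photon noise.
      t = np.linspace(0.0, 12.0, 256)
      truth = biexp_decay(t, 800.0, 0.8, 400.0, 3.0)
      data = np.random.poisson(truth).astype(float)

      popt, _ = curve_fit(biexp_decay, t, data, p0=(500.0, 1.0, 500.0, 2.5))
      print(popt)  # fitted amplitudes and lifetimes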

  20. Diagnosis of abnormal biliary copper excretion by positron emission tomography with targeting of 64Copper-asialofetuin complex in LEC rat model of Wilson’s disease

    PubMed Central

    Bahde, Ralf; Kapoor, Sorabh; Bhargava, Kuldeep K; Palestro, Christopher J; Gupta, Sanjeev

    2014-01-01

    Identification by molecular imaging of key processes in handling of transition state metals, such as copper (Cu), will be of considerable clinical value. For instance, the ability to diagnose Wilson’s disease with molecular imaging by identifying copper excretion in an ATP7B-dependent manner will be very significant. To develop highly effective diagnostic approaches, we hypothesized that targeting of radiocopper via the asialoglycoprotein receptor will be appropriate for positron emission tomography, and examined this approach in a rat model of Wilson’s disease. After complexing 64Cu to asialofetuin we studied handling of this complex compared with 64Cu in healthy LEA rats and diseased homozygous LEC rats lacking ATP7B and exhibiting hepatic copper toxicosis. We analyzed radiotracer clearance from blood, organ uptake, and biliary excretion, including sixty minute dynamic positron emission tomography recordings. In LEA rats, 64Cu-asialofetuin was better cleared from blood followed by liver uptake and greater biliary excretion than 64Cu. In LEC rats, 64Cu-asialofetuin activity cleared even more rapidly from blood followed by greater uptake in liver, but neither 64Cu-asialofetuin nor 64Cu appeared in bile. Image analysis demonstrated rapid visualization of liver after 64Cu-asialofetuin administration followed by decreased liver activity in LEA rats while liver activity progressively increased in LEC rats. Image analysis resolved this difference in hepatic activity within one hour. We concluded that 64Cu-asialofetuin complex was successfully targeted to the liver and radiocopper was then excreted into bile in an ATP7B-dependent manner. Therefore, hepatic targeting of radiocopper will be appropriate for improving molecular diagnosis and for developing drug/cell/gene therapies in Wilson’s disease. PMID:25250203

  1. Automated image analysis for quantitative fluorescence in situ hybridization with environmental samples.

    PubMed

    Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L

    2007-05-01

    When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6'-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18). Manual counting resulted in corresponding values of 52.3, 70.6, and 61.5%, respectively. In two swine manure samples and one soil sample 21.6, 12.3, and 2.5% of the cells were positive with an archaeal probe (S-D-Arch-0915-a-A-20), respectively. Manual counting resulted in corresponding values of 22.4, 14.0, and 2.9%, respectively. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
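
    The clustering step can be illustrated with a compact numpy implementation of the standard fuzzy c-means updates applied to per-cell intensity values (a generic sketch, not the Visilog-based implementation used in the study):

      import numpy as np

      def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, iters=100, seed=0):
          """Standard fuzzy c-means on 1-D data; returns centers and memberships."""
          rng = np.random.default_rng(seed)
          x = np.asarray(values, dtype=float).reshape(-1, 1)
          u = rng.random((len(x), n_clusters))
          u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships
          for _ in range(iters):
              um = u ** m
              centers = (um.T @ x) / um.sum(axis=0)[:, None]
              dist = np.abs(x - centers.T) + 1e-12      # (n_points, n_clusters)
              inv = dist ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)
          return centers.ravel(), u

      # Toy usage: separate "positive" (bright) from "negative" (dim) cells.
      intensities = np.concatenate([np.random.normal(30, 5, 200),
                                    np.random.normal(120, 10, 100)])
      centers, memberships = fuzzy_cmeans_1d(intensities)
      positive = memberships[:, np.argmax(centers)] > 0.5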

  2. MANTiS: a program for the analysis of X-ray spectromicroscopy data.

    PubMed

    Lerotic, Mirna; Mak, Rachel; Wirick, Sue; Meirer, Florian; Jacobsen, Chris

    2014-09-01

    Spectromicroscopy combines spectral data with microscopy, where typical datasets consist of a stack of images taken across a range of energies over a microscopic region of the sample. Manual analysis of these complex datasets can be time-consuming, and can miss the important traits in the data. With this in mind we have developed MANTiS, an open-source tool developed in Python for spectromicroscopy data analysis. The backbone of the package involves principal component analysis and cluster analysis, classifying pixels according to spectral similarity. Our goal is to provide a data analysis tool which is comprehensive, yet intuitive and easy to use. MANTiS is designed to lead the user through the analysis using story boards that describe each step in detail so that both experienced users and beginners are able to analyze their own data independently. These capabilities are illustrated through analysis of hard X-ray imaging of iron in Roman ceramics, and soft X-ray imaging of a malaria-infected red blood cell.
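
    A hedged sketch of the principal component analysis plus clustering backbone (expressed with scikit-learn rather than MANTiS itself), treating a spectromicroscopy stack of shape (n_energies, height, width) as one spectrum per pixel:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      # Hypothetical stack: 120 energies over a 64 x 64 pixel region.
      stack = np.random.rand(120, 64, 64)
      n_energies, h, w = stack.shape
      spectra = stack.reshape(n_energies, -1).T        # (n_pixels, n_energies)

      scores = PCA(n_components=5).fit_transform(spectra)
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
      cluster_map = labels.reshape(h, w)               # spectrally similar regions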

  3. Content Recognition and Context Modeling for Document Analysis and Retrieval

    ERIC Educational Resources Information Center

    Zhu, Guangyu

    2009-01-01

    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…

  4. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  5. Ruby-Helix: an implementation of helical image processing based on object-oriented scripting language.

    PubMed

    Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide

    2007-01-01

    Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.

  6. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery.

    PubMed

    Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-07-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard cell carcinoma lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.

  7. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery

    PubMed Central

    Stock, Kristin; Estrada, Marta F.; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E.; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C.; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-01-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard cell carcinoma lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600

  8. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
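
    The mass-preserving idea is easiest to see in one dimension, where optimal-transport (displacement) interpolation between two densities reduces to linear interpolation of their quantile functions. The numpy sketch below illustrates only this 1-D principle, not the authors' multidimensional method:

      import numpy as np

      def omt_interpolate_1d(x, p0, p1, t, n_quantiles=2000):
          """Displacement interpolation between two 1-D densities p0, p1 at time t."""
          cdf0 = np.cumsum(np.asarray(p0, dtype=float))
          cdf0 /= cdf0[-1]
          cdf1 = np.cumsum(np.asarray(p1, dtype=float))
          cdf1 /= cdf1[-1]
          u = np.linspace(1e-6, 1.0 - 1e-6, n_quantiles)
          q0 = np.interp(u, cdf0, x)                   # quantile functions
          q1 = np.interp(u, cdf1, x)
          qt = (1.0 - t) * q0 + t * q1                 # interpolated quantiles
          hist, edges = np.histogram(qt, bins=len(x), range=(x[0], x[-1]), density=True)
          return 0.5 * (edges[:-1] + edges[1:]), hist

      # Toy usage: interpolate halfway between two Gaussian "shapes".
      x = np.linspace(-5.0, 5.0, 400)
      g0 = np.exp(-(x + 2.0) ** 2)
      g1 = np.exp(-(x - 2.0) ** 2 / 0.5)
      centers, p_half = omt_interpolate_1d(x, g0, g1, t=0.5)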

  9. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples. An ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images, and the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.

  10. A novel encryption scheme for high-contrast image data in the Fresnelet domain

    PubMed Central

    Bibi, Nargis; Farwa, Shabieh; Jahngir, Adnan; Usman, Muhammad

    2018-01-01

    In this paper, a unique and more distinctive encryption algorithm is proposed. It is based on the complexity of a highly nonlinear S-box in the Fresnelet domain. The nonlinear pattern is transformed further to enhance the confusion in the dummy data using the Fresnelet technique. The security level of the encrypted image is boosted using the algebra of a Galois field in the Fresnelet domain. At the first level, the Fresnelet transform is used to propagate the given information with the desired wavelength at a specified distance. It decomposes the given secret data into four complex subbands. These complex subbands are separated into two components of real subband data and imaginary subband data. At the second level, the net subband data produced at the first level is converted to a nonlinear diffused pattern using the unique S-box defined on the Galois field GF(2^8). In the diffusion process, the permuted image is substituted via dynamic algebraic S-box substitution. We prove through various analysis techniques that the proposed scheme extensively enhances the cipher security level. PMID:29608609

  11. MR arthrography in glenohumeral instability.

    PubMed

    Van der Woude, H J; Vanhoenacker, F M

    2007-01-01

    The impact of accurate imaging in the work-up of patients with glenohumeral instability is high. Results of imaging may directly influence the surgeon's strategy to perform an arthroscopic or open treatment for (recurrent) instability. Magnetic resonance (MR) imaging, and MR arthrography in particular, is the optimal technique to detect, localize and characterize injuries of the capsular-labrum complex. Besides T1-weighted sequences with fat suppression in axial, oblique sagittal and coronal directions, an additional series in the abduction and external rotation (ABER) position is highly advocated. This ABER series optimally depicts abnormalities of the inferior capsular-labrum complex and partial undersurface tears of the supra- and infraspinatus tendons. Knowledge of the different anatomical variants that may mimic labral tears and of variants of the classic Bankart lesion is useful in the analysis of shoulder MR arthrograms in patients with glenohumeral instability.

  12. Separation of Gd-humic complexes and Gd-based magnetic resonance imaging contrast agent in river water with QAE-Sephadex A-25 for the fractionation analysis.

    PubMed

    Matsumiya, Hiroaki; Inoue, Hiroto; Hiraide, Masataka

    2014-10-01

    Gadolinium complexed with naturally occurring, negatively charged humic substances (humic and fulvic acids) was collected from 500 mL of sample solution onto a column packed with 150 mg of a strongly basic anion-exchanger (QAE-Sephadex A-25). A Gd-based magnetic resonance imaging contrast agent (diethylenetriamine-N,N,N',N″,N″-pentaacetato aquo gadolinium(III), Gd-DTPA(2-)) was simultaneously collected on the same column. The Gd-DTPA complex was desorbed by anion-exchange with 50mM tetramethylammonium sulfate, leaving the Gd-humic complexes on the column. The Gd-humic complexes were subsequently dissociated with 1M nitric acid to desorb the humic fraction of Gd. The two-step desorption with small volumes of the eluting agents allowed the 100-fold preconcentration for the fractionation analysis of Gd at low ng L(-1) levels by inductively coupled plasma-mass spectrometry (ICP-MS). On the other hand, Gd(III) neither complexed with humic substances nor DTPA, i.e., free species, was not sorbed on the column. The free Gd in the effluent was preconcentrated 100-fold by a conventional solid-phase extraction with an iminodiacetic acid-type chelating resin and determined by ICP-MS. The proposed analytical fractionation method was applied to river water samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the freeware software ImageJ developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations in the physical characteristics of some examples of fragmental impactites.

  14. Imaging Complex Protein Metabolism in Live Organisms by Stimulated Raman Scattering Microscopy with Isotope Labeling

    PubMed Central

    2016-01-01

    Protein metabolism, consisting of both synthesis and degradation, is highly complex, playing an indispensable regulatory role throughout physiological and pathological processes. Over recent decades, extensive efforts, using approaches such as autoradiography, mass spectrometry, and fluorescence microscopy, have been devoted to the study of protein metabolism. However, noninvasive and global visualization of protein metabolism has proven to be highly challenging, especially in live systems. Recently, stimulated Raman scattering (SRS) microscopy coupled with metabolic labeling with deuterated amino acids (D-AAs) was demonstrated for use in imaging newly synthesized proteins in cultured cell lines. Herein, we significantly generalize this notion to develop a comprehensive labeling and imaging platform for live visualization of complex protein metabolism, including synthesis, degradation, and pulse–chase analysis of two temporally defined populations. First, the deuterium labeling efficiency was optimized, allowing time-lapse imaging of protein synthesis dynamics within individual live cells with high spatial–temporal resolution. Second, by tracking the methyl group (CH3) distribution attributed to pre-existing proteins, this platform also enables us to map protein degradation inside live cells. Third, using two subsets of structurally and spectroscopically distinct D-AAs, we achieved two-color pulse–chase imaging, as demonstrated by observing aggregate formation of mutant huntingtin proteins. Finally, going beyond simple cell lines, we demonstrated the ability to image protein synthesis in brain tissues, zebrafish, and mice in vivo. Hence, the presented labeling and imaging platform would be a valuable tool to study complex protein metabolism with high sensitivity, resolution, and biocompatibility for a broad spectrum of systems ranging from cells to model animals and possibly to humans. PMID:25560305

  15. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    PubMed

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters that characterize the state and changes of image structures of interest (e.g., tumors, organs, vessels, bones, etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods from different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and patient risk analysis, and will gain importance in the diagnostics and therapy of the future. From a methodological point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or operation planning is a complex interdisciplinary process. Image computing methods enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  16. Lunar sample analysis

    NASA Technical Reports Server (NTRS)

    Housley, R. M.

    1983-01-01

    The evolution of the lunar regolith under solar wind and micrometeorite bombardment is discussed as well as the size distribution of ultrafine iron in lunar soil. The most important characteristics of complex graphite, sulfide, arsenide, palladium, and platinum mineralization in a pegmatoid pyroxenite of the Stillwater Complex in Montana are examined. Oblique reflected light micrographs and backscattered electron SEM images of the graphite associations are included.

  17. Palatal Complexity Revisited: An Electropalatographic Analysis of /image omitted/ in Brazilian Portuguese with Comparison to Peninsular Spanish

    ERIC Educational Resources Information Center

    Shosted, Ryan; Hualde, Jose Ignacio; Scarpace, Daniel

    2012-01-01

    Are palatal consonants articulated by multiple tongue gestures (coronal and dorsal) or by a single gesture that brings the tongue into contact with the palate at several places of articulation? The lenition of palatal consonants (resulting in approximants) has been presented as evidence that palatals are simple, not complex: When reduced, they do…

  18. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy

    2015-01-01

    Several investigators have studied radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box-counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
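
    For readers unfamiliar with the box-counting method mentioned above, the following minimal Python sketch estimates the box-counting (fractal) dimension of a binary scanpath image; it is not the authors' implementation, and the box sizes and synthetic test image are illustrative.

      # Minimal sketch of the box-counting method: estimate the fractal
      # dimension of a binary scanpath image (True where the gaze path passes).
      import numpy as np

      def box_count(mask, size):
          """Count boxes of the given size containing at least one True pixel."""
          h, w = mask.shape
          count = 0
          for i in range(0, h, size):
              for j in range(0, w, size):
                  if mask[i:i + size, j:j + size].any():
                      count += 1
          return count

      def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
          counts = [box_count(mask, s) for s in sizes]
          # Slope of log(count) vs log(1/size) gives the box-counting dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      # Example with a synthetic diagonal "scanpath" (dimension close to 1)
      mask = np.zeros((256, 256), dtype=bool)
      np.fill_diagonal(mask, True)
      print(f"Estimated fractal dimension: {fractal_dimension(mask):.2f}")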

  19. Image analysis of the blood cells for cytomorphodiagnostics and control of the effectiveness treatment

    NASA Astrophysics Data System (ADS)

    Zhukotsky, Alexander V.; Kogan, Emmanuil M.; Kopylov, Victor F.; Marchenko, Oleg V.; Lomakin, O. A.

    1994-07-01

    A new method for morphodensitometric analysis of blood cells was applied to medical screening for ecological influences and infectious pathologies. A complex algorithm of computational image processing was created for research on supramolecular restructuring of interphase chromatin in lymphocytes. It includes specific staining methods and unifies different quantitative analysis methods. Our experience with the use of a television image analyzer in cytological and immunological studies made it possible to carry out morphometric analysis of chromatin structure in interphase lymphocyte nuclei in genetic and viral pathologies. In our study, to characterize lymphocytes as an image-forming system by a rigorous mathematical description, we used an approach involving concomitant evaluation of the topography of the chromatin network in lymphocytes from intact (control) and affected individuals. It is also possible to digitize the data, which revealed significant distinctions between control and experiment. The method allows us to observe minute structural changes in chromatin, especially eu- and heterochromatin, that were previously studied by geneticists only in chromosomes.

  20. SimpleITK Image-Analysis Notebooks: a Collaborative Environment for Education and Reproducible Research.

    PubMed

    Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard

    2018-06-01

    Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user-friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. A large number of example notebooks illustrating educational and research-oriented image-analysis workflows are available for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks.
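
    A minimal sketch of the kind of workflow the notebooks illustrate is given below; it reads an image with SimpleITK, smooths it, segments it with Otsu thresholding, and reports per-label physical sizes. The file names are hypothetical.

      # Minimal SimpleITK sketch: read, smooth, segment, and measure.
      import SimpleITK as sitk

      image = sitk.ReadImage("input_volume.nii.gz")          # hypothetical ITK-readable file
      image = sitk.Cast(image, sitk.sitkFloat32)             # CurvatureFlow needs a real pixel type
      smoothed = sitk.CurvatureFlow(image1=image, timeStep=0.125, numberOfIterations=5)

      # Otsu threshold: voxels above the threshold are labeled 1, the rest 0
      segmentation = sitk.OtsuThreshold(smoothed, 0, 1)

      stats = sitk.LabelShapeStatisticsImageFilter()
      stats.Execute(segmentation)
      for label in stats.GetLabels():
          print(label, stats.GetPhysicalSize(label))         # size in physical units

      sitk.WriteImage(segmentation, "segmentation.nii.gz")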

  1. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.
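
    The first step, unsharp masking, can be sketched as follows; this is a plain linear variant for illustration, not the author's specific nonlinear scheme, and the parameter values are assumptions.

      # Minimal sketch of unsharp masking: sharpen an image by adding back a
      # scaled difference between the image and a blurred copy.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(image, sigma=2.0, amount=1.5):
          blurred = gaussian_filter(image.astype(float), sigma=sigma)
          detail = image - blurred                 # high-frequency residual
          enhanced = image + amount * detail
          return np.clip(enhanced, 0, 255)

      # Usage on a synthetic image
      img = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(float)
      sharpened = unsharp_mask(img)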

  2. Delineation of karst terranes in complex environments: Application of modern developments in the wavelet theory and data mining

    NASA Astrophysics Data System (ADS)

    Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery

    2013-04-01

    Karst areas occupy about 14% of the world's land. Karst terranes of different origin create difficult conditions for building, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will obviously be one of the most significant problems of engineering geophysics in the 21st century. Taking into account the complexity of geological media, unfavourable environments and the known ambiguity of geophysical data analysis, examination with a single geophysical method might be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising of signals, enhancement of signals, distinguishing of signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, earth surface or underground observed data). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling, devoted to computing various geophysical effects characterizing karst phenomena. The second phase is the development of signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase provides integration of these methods in order to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology of geophysical field examination will enable recognition of karst terranes even at a small signal-to-noise ratio in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images not containing karst (N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of the coherency directions. As a result of the previous steps, we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and non-karst subsurface, respectively.

  3. Radiography for intensive care: participatory process analysis in a PACS-equipped and film/screen environment

    NASA Astrophysics Data System (ADS)

    Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes

    2002-05-01

    If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches of outcome analysis have failed to show a direct benefit of PACS on patient care, and the economic benefits are still under debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews of involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting (with simultaneous image access for ICU physicians and radiologists), archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care', from ordering of an image to provision of the final product (image + report). Interference of electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of the complex work processes linking radiology and the ICU.

  4. A gap in science's and the media images of people who use drugs and sex workers: research on organizations of the oppressed.

    PubMed

    Dziuban, Agata; Friedman, Samuel R

    2015-03-01

    This paper discusses organizations of the oppressed, such as drug user and sex worker groups, and the images of themselves that they construct. We suggest that analysis of these organizationally produced collective self-images, frequently overlooked in scholarly research, is crucial to understanding the complex internal dynamics of users' and sex workers' organizations and the struggles they engage in when defining their collective (organizational) identities and courses of action.

  5. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  6. A complex network approach for nanoparticle agglomeration analysis in nanoscale images

    NASA Astrophysics Data System (ADS)

    Machado, Bruno Brandoli; Scabini, Leonardo Felipe; Margarido Orue, Jonatan Patrick; de Arruda, Mauro Santos; Goncalves, Diogo Nunes; Goncalves, Wesley Nunes; Moreira, Raphaell; Rodrigues-Jr, Jose F.

    2017-02-01

    Complex networks have been widely used in science and technology because of their ability to represent several systems. One of these systems is found in Biochemistry, in which the synthesis of new nanoparticles is a hot topic. However, the interpretation of experimental results in the search for new nanoparticles poses several challenges. This is due to the characteristics of nanoparticle images and to their multiple intricate properties; one property of recurrent interest is the agglomeration of particles. Addressing this issue, this paper introduces an approach that uses complex networks to detect and describe nanoparticle agglomerates so as to foster easier and more insightful analyses. In this approach, each detected particle in an image corresponds to a vertex, and the distances between the particles define a criterion for creating edges. Edges are created if the distance is smaller than a radius of interest. Once this network is set, we calculate several discrete measures able to reveal the most outstanding agglomerates in a nanoparticle image. Experimental results using scanning tunneling microscopy (STM) images of gold nanoparticles demonstrated the effectiveness of the proposed approach over several samples, as reflected by the separability between particles in three usual settings. The results also demonstrated efficacy for both convex and non-convex agglomerates.
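
    The graph construction described above can be sketched as follows; the centroid coordinates and radius of interest are illustrative, and the network measures shown are generic examples rather than the exact descriptors used by the authors.

      # Minimal sketch: particles become vertices, and an edge links two
      # particles whose centroid distance is below a radius of interest.
      import itertools
      import numpy as np
      import networkx as nx

      centroids = np.array([[10, 12], [14, 15], [80, 82], [83, 85], [84, 90]], float)
      radius = 10.0

      graph = nx.Graph()
      graph.add_nodes_from(range(len(centroids)))
      for i, j in itertools.combinations(range(len(centroids)), 2):
          if np.linalg.norm(centroids[i] - centroids[j]) < radius:
              graph.add_edge(i, j)

      # Each connected component is a candidate agglomerate; simple network
      # measures describe its size and compactness.
      for component in nx.connected_components(graph):
          sub = graph.subgraph(component)
          print(sorted(component), sub.number_of_edges(), nx.density(sub))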

  7. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine-grained access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  8. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  9. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
    Program summary
    Title of program: HAWGC
    Catalogue identifier: ADXG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Mobile Intel Pentium III, AMD Duron
    Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine
    Operating systems or monitors under which the program has been tested: Windows ME/2000/XP
    Programming language used: LabVIEW 7.0
    Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for loading of an image
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of lines in distributed program, including test data, etc.: 138 946
    No. of bytes in distributed program, including test data, etc.: 15 166 675
    Distribution format: tar.gz
    Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user.
    Method of solution: Tool for conversion of an RGB bitmap image into a luminance-linear image and extraction of the luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and median of the probability distribution), with possible selection of a region of interest (ROI) and lower and upper threshold levels.
    Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis)
    Typical running time: Seconds (depending on image size and processor speed)
    Unusual features of the program: None
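
    The same kind of analysis can be sketched in Python (this is not the LabVIEW program itself); the file name, conversion coefficients, and threshold levels are illustrative.

      # Minimal sketch: convert an RGB image to a luminance image with
      # selectable coefficients, then report histogram statistics.
      import numpy as np
      from scipy import stats
      from skimage import io

      rgb = io.imread("blot.png").astype(float)            # hypothetical file name
      coeffs = np.array([0.299, 0.587, 0.114])             # selectable conversion coefficients
      luminance = rgb[..., :3] @ coeffs

      # Optional thresholds to exclude background, as in the described program
      lower, upper = 10, 250
      values = luminance[(luminance >= lower) & (luminance <= upper)].ravel()

      print("mean      ", values.mean())
      print("std dev   ", values.std())
      print("min / max ", values.min(), values.max())
      print("skewness  ", stats.skew(values))
      print("kurtosis  ", stats.kurtosis(values))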

  10. Computer simulation of schlieren images of rotationally symmetric plasma systems: a simple method.

    PubMed

    Noll, R; Haas, C R; Weikl, B; Herziger, G

    1986-03-01

    Schlieren techniques are commonly used methods for quantitative analysis of cylindrical or spherical index of refraction profiles. Many schlieren objects, however, are characterized by more complex geometries, so we have investigated the more general case of noncylindrical, rotationally symmetric distributions of index of refraction n(r,z). Assuming straight ray paths in the schlieren object we have calculated 2-D beam deviation profiles. It is shown that experimental schlieren images of the noncylindrical plasma generated by a plasma focus device can be simulated with these deviation profiles. The computer simulation allows a quantitative analysis of these schlieren images, which yields, for example, the plasma parameters, electron density, and electron density gradients.

  11. An analytical tool that quantifies cellular morphology changes from three-dimensional fluorescence images.

    PubMed

    Haass-Koffler, Carolina L; Naeemuddin, Mohammad; Bartlett, Selena E

    2012-08-31

    The most common software analysis tools available for measuring fluorescence images are for two-dimensional (2D) data that rely on manual settings for inclusion and exclusion of data points, and computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed from the approximation and assumptions of the original model-based stereology, even in complex tissue sections. Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (tree-like structures). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells; however, the output data provide information on an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, at present no user-modifiable automated approach has been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extended expertise in biological systems, but limited familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.

  12. Pros and Cons of 3D Image Fusion in Endovascular Aortic Repair: A Systematic Review and Meta-analysis.

    PubMed

    Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M

    2017-08-01

    To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that described 3D fusion imaging in endovascular aortic procedures and included a control group. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. The contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) was pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
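
    The random-effects pooling of mean differences can be sketched as follows (DerSimonian-Laird estimator); the per-study effect sizes and variances below are purely illustrative and are not the data from the reviewed studies.

      # Minimal sketch of random-effects pooling of mean differences.
      import numpy as np

      effects = np.array([35.0, 52.0, 28.0, 60.0])      # illustrative per-study mean reductions (mL)
      variances = np.array([120.0, 200.0, 90.0, 260.0]) # illustrative variances of those means

      w = 1.0 / variances                                # fixed-effect weights
      fixed = np.sum(w * effects) / np.sum(w)
      q = np.sum(w * (effects - fixed) ** 2)             # Cochran's Q
      df = len(effects) - 1
      tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

      w_star = 1.0 / (variances + tau2)                  # random-effects weights
      pooled = np.sum(w_star * effects) / np.sum(w_star)
      se = np.sqrt(1.0 / np.sum(w_star))
      print(f"pooled MD = {pooled:.1f} mL, "
            f"95% CI {pooled - 1.96*se:.1f} to {pooled + 1.96*se:.1f}")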

  13. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    PubMed

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images to measure black and white pixels using ImageJ and measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the 2 methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
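
    A minimal sketch of the binary-conversion approach for percentage positive area, broadly in the spirit of the ImageJ analysis compared above, is shown below using stain separation in scikit-image; the file name and DAB threshold are illustrative assumptions.

      # Minimal sketch: measure percentage DAB-positive area by converting the
      # DAB channel to a binary image.
      import numpy as np
      from skimage import io
      from skimage.color import rgb2hed

      rgb = io.imread("dab_section.png")[..., :3]       # hypothetical brightfield image
      hed = rgb2hed(rgb)                                # separate haematoxylin / eosin / DAB
      dab = hed[..., 2]                                 # DAB channel

      positive = dab > 0.05                             # illustrative fixed threshold
      percent_positive_area = 100.0 * positive.mean()
      print(f"DAB-positive area: {percent_positive_area:.1f}%")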

  14. Fractal analysis of the susceptibility weighted imaging patterns in malignant brain tumors during antiangiogenic treatment: technical report on four cases serially imaged by 7 T magnetic resonance during a period of four weeks.

    PubMed

    Di Ieva, Antonio; Matula, Christian; Grizzi, Fabio; Grabner, Günther; Trattnig, Siegfried; Tschabitscher, Manfred

    2012-01-01

    The need for new and objective indexes for the neuroradiologic follow-up of brain tumors and for monitoring the effects of antiangiogenic strategies in vivo led us to perform a technical study on four patients who received computerized analysis of tumor-associated vasculature with ultra-high-field (7 T) magnetic resonance imaging (MRI). The image analysis involved the application of susceptibility weighted imaging (SWI) to evaluate vascular structures. Four patients affected by recurrent malignant brain tumors were enrolled in the present study. After the first 7-T SWI MRI procedure, the patients underwent antiangiogenic treatment with bevacizumab. The imaging was repeated every 2 weeks for a period of 4 weeks. The SWI patterns visualized in the three MRI temporal sequences were analyzed by means of a computer-aided fractal-based method to objectively quantify their geometric complexity. In two clinically deteriorating patients we found an increase of the geometric complexity of the space-filling properties of the SWI patterns over time despite the antiangiogenic treatment. In one patient, who showed improvement with the therapy, the fractal dimension of the intratumoral structure decreased, whereas in the fourth patient, no differences were found. The qualitative changes of the intratumoral SWI patterns during a period of 4 weeks were quantified with the fractal dimension. Because SWI patterns are also related to the presence of vascular structures, the quantification of their space-filling properties with fractal dimension seemed to be a valid tool for the in vivo neuroradiologic follow-up of brain tumors. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Solution of the problem of superposing image and digital map for detection of new objects

    NASA Astrophysics Data System (ADS)

    Rizaev, I. S.; Miftakhutdinov, D. I.; Takhavova, E. G.

    2018-01-01

    The problem of superposing a terrain map with an image of the terrain is considered; the image may be acquired in different frequency bands. Further analysis of the results of collating the digital map with the image of the corresponding terrain is described. An approach to detecting differences between information represented on the digital map and information in the image of the corresponding area is also offered. An algorithm for calculating the brightness values of the converted image area from the original picture is presented. The calculation is based on navigation parameters and on information from arranged benchmarks. Experiments were performed to address the posed problem, and their results are shown in this paper. The presented algorithms are applicable to ground-based processing of remote sensing data to assess differences between resulting images and accurate geopositional data. They are also suitable for detecting new objects in the image, based on analysis of the match between the digital map and the image of the corresponding locality.

  16. Multispectral analysis tools can increase utility of RGB color images in histology

    NASA Astrophysics Data System (ADS)

    Fereidouni, Farzad; Griffin, Croix; Todd, Austin; Levenson, Richard

    2018-04-01

    Multispectral imaging (MSI) is increasingly finding application in the study and characterization of biological specimens. However, the methods typically used come with challenges on both the acquisition and the analysis front. MSI can be slow and photon-inefficient, leading to long imaging times and possible phototoxicity and photobleaching. The resulting datasets can be large and complex, prompting the development of a number of mathematical approaches for segmentation and signal unmixing. We show that under certain circumstances, just three spectral channels provided by standard color cameras, coupled with multispectral analysis tools, including a more recent spectral phasor approach, can efficiently provide useful insights. These findings are supported by a mathematical model relating spectral bandwidth and spectral channel number to achievable spectral accuracy. The utility of 3-band RGB and MSI analysis tools is demonstrated on images acquired using brightfield and fluorescence techniques, as well as a novel microscopy approach employing UV surface excitation. Supervised linear unmixing, automated non-negative matrix factorization, and phasor analysis tools all provide useful results, with phasors generating particularly helpful spectral display plots for sample exploration.
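
    The spectral phasor approach mentioned above maps each pixel's spectrum to the real and imaginary parts of its first Fourier harmonic, normalized by total intensity. A minimal sketch, using a synthetic data cube, follows.

      # Minimal sketch of a spectral phasor transform: each pixel's (G, S)
      # coordinates are the normalized first-harmonic Fourier coefficients.
      import numpy as np

      def spectral_phasor(cube):
          """cube: array of shape (rows, cols, n_channels)."""
          n = cube.shape[-1]
          k = np.arange(n)
          total = cube.sum(axis=-1) + 1e-12               # avoid division by zero
          g = (cube * np.cos(2 * np.pi * k / n)).sum(axis=-1) / total
          s = (cube * np.sin(2 * np.pi * k / n)).sum(axis=-1) / total
          return g, s

      # Example: a 3-channel (RGB-like) cube; each pixel maps to a point in phasor space
      cube = np.random.default_rng(1).random((64, 64, 3))
      G, S = spectral_phasor(cube)
      print(G.shape, S.shape)    # (64, 64) each; plot S against G to explore the sample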

  17. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows several deformable models to be integrated to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected under different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or intra-structure inhomogeneities, giving excellent quantitative results.
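
    The DMA framework itself is not sketched here, but the single-object building block it coordinates can be illustrated with a generic active-contour segmentation; the following minimal example uses the morphological Chan-Vese model from scikit-image on a stock test image.

      # Minimal sketch of one deformable-model building block: a morphological
      # Chan-Vese active contour segmenting a single structure. A multi-object
      # framework would run several such models and coordinate their interaction.
      from skimage import data, img_as_float
      from skimage.segmentation import morphological_chan_vese

      image = img_as_float(data.camera())            # example grayscale image
      level_set = morphological_chan_vese(image, 50) # 50 iterations, default init

      print("segmented pixels:", int(level_set.sum()))   # binary mask of the object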

  18. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advancing Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems because images capture the overall dynamics as a "whole." Therefore, "extraction" of key, quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open-source software packages and algorithmic protocols will provide a novel approach for modeling the experimental research program. We also highlight foreseeable future trends in methods for automatically analyzing biological data. Such tools will be very useful for understanding the detailed biological and mathematical relationships underlying in silico systems biology models.

  19. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  20. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  1. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  2. Biologically-inspired data decorrelation for hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.

    2011-12-01

    Hyper-spectral data allow the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures and, in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
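
    As a point of reference for the decorrelation step, the conventional PCA baseline that the article contrasts with its biologically-inspired scheme can be sketched as follows; the data cube is synthetic and the number of retained components is an arbitrary choice.

      # Minimal sketch of conventional PCA decorrelation of hyperspectral pixels.
      import numpy as np
      from sklearn.decomposition import PCA

      rows, cols, bands = 64, 64, 120
      cube = np.random.default_rng(2).random((rows, cols, bands))   # synthetic cube

      pixels = cube.reshape(-1, bands)                  # one spectrum per row
      pca = PCA(n_components=10)                        # keep 10 decorrelated components
      descriptors = pca.fit_transform(pixels)           # compact image descriptors
      compressed_cube = descriptors.reshape(rows, cols, -1)

      print(compressed_cube.shape, pca.explained_variance_ratio_.sum())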

  3. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author

  4. Temporal fractal analysis of the rs-BOLD signal identifies brain abnormalities in autism spectrum disorder.

    PubMed

    Dona, Olga; Hall, Geoffrey B; Noseworthy, Michael D

    2017-01-01

    Brain connectivity in autism spectrum disorders (ASD) has proven difficult to characterize due to the heterogeneous nature of the spectrum. Connectivity in the brain occurs in a complex, multilevel and multi-temporal manner, driving the fluctuations observed in local oxygen demand. These fluctuations can be characterized as fractals, as they auto-correlate at different time scales. In this study, we propose a model-free complexity analysis based on the fractal dimension of the rs-BOLD signal, acquired with magnetic resonance imaging. The fractal dimension can be interpreted as a measure of signal complexity and connectivity. Previous studies have suggested that reduction in signal complexity can be associated with disease. Therefore, we hypothesized that a detectable difference in rs-BOLD signal complexity could be observed between ASD patients and controls. Anatomical and functional data from 55 subjects with ASD (12.7 ± 2.4 y/o) and 55 age-matched (14.1 ± 3.1 y/o) healthy controls were accessed through the NITRC database and the ABIDE project. Subjects were scanned using a 3T GE Signa MRI and a 32-channel RF coil. Axial FSPGR-3D images were used to prescribe rs-BOLD (TE/TR = 30/2000 ms), where 300 time points were acquired. Motion correction was performed on the functional data, and anatomical and functional images were aligned and spatially warped to the N27 standard brain atlas. Fractal analysis, performed on a grey matter mask, was done by estimating the Hurst exponent in the frequency domain using a power spectral density approach and refining the estimation in the time domain with detrended fluctuation analysis and signal summation conversion methods. Voxel-wise fractal dimension (FD) was calculated for every subject in the control group and in the ASD group to create ROI-based Z-scores for the ASD patients. Voxel-wise validation of FD normality across controls was confirmed, and non-Gaussian voxels were eliminated from subsequent analysis. To maintain a 95% confidence level, only regions where Z-score values were at least 2 standard deviations away from the mean (i.e. where |Z| > 2.0) were included in the analysis. We found that the main regions where signal complexity significantly decreased among ASD patients were the amygdala (p = 0.001), the vermis (p = 0.02), the basal ganglia (p = 0.01) and the hippocampus (p = 0.02). No regions showed a significant increase in signal complexity in this study. Our findings were correlated with ADI-R and ADOS assessment tools, with the highest correlation observed for the ADOS metrics. Brain connectivity is best modeled as a complex system. Therefore, a measure of complexity such as the fractal dimension of fluctuations in brain oxygen demand and utilization could provide important information about connectivity issues in ASD. Moreover, this technique can be used in the characterization of a single subject, with respect to controls, without the need for group analysis. Our novel approach provides an ideal avenue for personalized diagnostics, thus providing unique patient-specific assessment that could help in individualizing treatments.
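
    The time-domain refinement step mentioned above, detrended fluctuation analysis (DFA), can be sketched as follows for a single voxel's time series; the synthetic white-noise signal should yield a scaling exponent near 0.5, and the window sizes are illustrative.

      # Minimal sketch of detrended fluctuation analysis (DFA) for estimating a
      # scaling (Hurst-like) exponent of one time series.
      import numpy as np

      def dfa(signal, window_sizes=(4, 8, 16, 32, 64)):
          y = np.cumsum(signal - np.mean(signal))            # integrated profile
          fluctuations = []
          for n in window_sizes:
              n_windows = len(y) // n
              residuals = []
              for w in range(n_windows):
                  segment = y[w * n:(w + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, segment, 1), t)
                  residuals.append(np.mean((segment - trend) ** 2))
              fluctuations.append(np.sqrt(np.mean(residuals)))
          # Slope of log F(n) vs log n is the scaling exponent alpha
          alpha = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]
          return alpha

      bold_like = np.random.default_rng(3).normal(size=300)   # 300 time points, as acquired
      print(f"DFA scaling exponent: {dfa(bold_like):.2f}")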

  5. Technical Note: Harmonic analysis applied to MR image distortion fields specific to arbitrarily shaped volumes.

    PubMed

    Stanescu, T; Jaffray, D

    2018-05-25

    Magnetic resonance imaging is expected to play a more important role in radiation therapy given the recent developments in MR-guided technologies. MR images need to consistently show high spatial accuracy to facilitate RT-specific tasks such as treatment planning and in-room guidance. The present study investigates a new harmonic analysis method for the characterization of complex 3D fields derived from MR images affected by system-related distortions. An interior Dirichlet problem based on solving the Laplace equation with boundary conditions (BCs) was formulated for the case of a 3D distortion field. The second-order boundary value problem (BVP) was solved using a finite element method (FEM) for several quadratic geometries, i.e., sphere, cylinder, cuboid, D-shaped, and ellipsoid. To stress-test the method and generalize it, the BVP was also solved for more complex surfaces such as a Reuleaux 9-gon and the MR imaging volume of a scanner featuring a high degree of surface irregularities. The BCs were constructed from reference experimental data collected with a linearity phantom featuring a volumetric grid structure. The method was validated by comparing the harmonic analysis results with the corresponding experimental reference fields. The harmonic fields were found to be in good agreement with the baseline experimental data for all geometries investigated. In the case of quadratic domains, the percentages of sampling points with residual values larger than 1 mm were 0.5% and 0.2% for the axial components and the vector magnitude, respectively. For the general case of a domain defined by the available MR imaging field of view, the reference data showed a peak distortion of about 12 mm, and 79% of the sampling points carried a distortion magnitude larger than 1 mm (the tolerance intrinsic to the experimental data). The upper limits of the residual values after comparison with the harmonic fields showed a maximum of 1.4 mm and a mean of 0.25 mm, with only 1.5% of sampling points exceeding 1 mm. A novel harmonic analysis approach relying on finite element methods was introduced and validated for multiple volumes with surface shape functions ranging from simple to highly complex. Since a boundary value problem is solved, the method requires input data from only the surface of the desired domain of interest. It is believed that the harmonic method will facilitate (a) the design of new phantoms dedicated to the quantification of MR image distortions in large volumes and (b) an integrative approach combining multiple imaging tests specific to radiotherapy into a single test object for routine imaging quality control. This article is protected by copyright. All rights reserved.
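
    The interior Dirichlet idea, that values inside the domain are fully determined by boundary values through the Laplace equation, can be sketched on a simple square grid with finite differences (the study itself uses FEM on arbitrary surfaces); the boundary values below are illustrative.

      # Minimal sketch of an interior Dirichlet problem for one scalar component
      # of a distortion field: boundary values are fixed, and the interior is
      # filled with the harmonic solution of Laplace's equation via Jacobi
      # relaxation on a finite-difference grid (not the FEM used in the study).
      import numpy as np

      n = 65
      field = np.zeros((n, n))
      # Illustrative boundary conditions (e.g., distortion measured on the surface)
      field[0, :] = 1.0
      field[-1, :] = np.linspace(0.0, 2.0, n)

      for _ in range(5000):                               # Jacobi iterations
          field[1:-1, 1:-1] = 0.25 * (field[:-2, 1:-1] + field[2:, 1:-1] +
                                      field[1:-1, :-2] + field[1:-1, 2:])

      print("interior sample:", field[n // 2, n // 2])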

  6. Porphyrin-containing polyaspartamide gadolinium complexes as potential magnetic resonance imaging contrast agents.

    PubMed

    Yan, Guo-Ping; Li, Zhen; Xu, Wei; Zhou, Cheng-Kai; Yang, Lian; Zhang, Qiao; Li, Liang; Liu, Fan; Han, Lin; Ge, Yuan-Xing; Guo, Jun-Fang

    2011-04-04

    Porphyrin-containing polyaspartamide ligands (APTSPP-PHEA-DTPA) were synthesized by the incorporation of diethylenetriaminepentaacetic acid (DTPA) and 5-(4'-aminophenyl)-10,15,20-tris(4'-sulfonatophenyl) porphyrin, trisodium salt (APTSPP) into poly-α,β-[N-(2-hydroxyethyl)-l-aspartamide] (PHEA). These ligands were further reacted with gadolinium chloride to produce macromolecule-gadolinium complexes (APTSPP-PHEA-DTPA-Gd). Experimental data from (1)H NMR, IR, UV and elemental analysis confirmed the formation of the polyaspartamide ligands and gadolinium complexes. In vitro and in vivo property tests indicated that APTSPP-PHEA-DTPA-Gd possessed noticeably higher relaxation effectiveness, lower toxicity to HeLa cells, and significantly higher enhancement of signal intensities (SI) of the VX2 carcinoma in rabbits at a lower injection dose than Gd-DTPA. Moreover, APTSPP-PHEA-DTPA-Gd was found to greatly enhance the contrast of MR images of the VX2 carcinoma, to provide prolonged intravascular duration, and to distinguish the VX2 carcinoma from normal tissues in rabbits according to MR image signal enhancement. These porphyrin-containing polyaspartamide gadolinium complexes are candidate contrast agents for tumor-targeted MRI. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  7. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at the practical requirements of gun bore flaw image acquisition, such as accurate optimal design, complex algorithms and precise technical demands, this paper presents the design framework of a 3-D image acquisition and processing system based on multi-baseline stereo imaging. The system mainly consists of a computer, an electrical control box, a stepping motor and a CCD camera, and it performs image acquisition, stereo matching, 3-D information reconstruction and post-processing. Theoretical analysis and experimental results show that the images acquired by this system are precise and that the system efficiently resolves the matching ambiguity caused by uniform or repeated textures. At the same time, the system offers higher measurement speed and higher measurement precision.

  8. Oufti: An integrated software package for high-accuracy, high-throughput quantitative microscopy analysis

    PubMed Central

    Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine

    2016-01-01

    With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279

  9. Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing

    DOEpatents

    Voelkl, Edgar

    2006-06-27

    Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
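    The noise-reduction and smearing-reduction steps in this record reduce to simple complex-valued arithmetic once the holograms have been reconstructed into complex image waves. The sketch below is only an illustration under that assumption: it averages several reference complex image waves into a reduced-noise reference and divides the object complex image wave by it; the averaging choice, array shapes and values are not taken from the patent.

        import numpy as np

        def reduced_noise_reference(reference_waves):
            """Combine several reference complex image waves into one low-noise wave.

            A pixel-wise complex mean is used here purely as an illustration of
            transforming a plurality of reference image waves into a reduced-noise
            reference image wave.
            """
            return np.mean(np.stack(reference_waves, axis=0), axis=0)

        def correct_object_wave(object_wave, reference_wave, eps=1e-12):
            """Divide the object complex image wave by the reduced-noise reference,
            as described for suppressing smearing in Fourier space."""
            return object_wave / (reference_wave + eps)

        # Illustrative complex image waves (shapes and values are arbitrary).
        rng = np.random.default_rng(0)
        refs = [np.exp(1j * (0.1 + 0.01 * rng.standard_normal((256, 256)))) for _ in range(8)]
        obj = np.exp(1j * 0.5) * np.ones((256, 256))
        corrected = correct_object_wave(obj, reduced_noise_reference(refs))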

  10. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999), and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  11. Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images

    NASA Astrophysics Data System (ADS)

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is a promising new technique for early detection of demineralization which can classify healthy and pathological dental tissues. However, due to non-ideal illumination of the tooth surface, the hyperspectral images can exhibit specular reflections, in particular around the edges and the ridges of the teeth. These reflections significantly affect the performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections; however, due to its complexity and other imaging setup limitations, it is not always feasible. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental tissues, which significantly improves the classification accuracy in the presence of specular reflections. The method was evaluated on five extracted human teeth with a corresponding gold standard for six different healthy and pathological hard dental tissues, including enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues. The classification was performed by employing multiple discriminant analysis. Based on the obtained results, we believe the proposed method can be considered an effective alternative to complex cross-polarized imaging setups.
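    As a rough illustration of the per-pixel classification pipeline mentioned above (PCA to model spectral variation, followed by discriminant analysis), the sketch below uses scikit-learn on placeholder data; the specular-reflection modelling step, the gold-standard labels and the actual NIR spectra are not reproduced here.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        # Placeholder training data: one NIR spectrum per pixel (n_pixels x n_bands)
        # and a tissue label per pixel (enamel, dentin, calculus, caries, ...).
        rng = np.random.default_rng(1)
        X_train = rng.standard_normal((5000, 128))
        y_train = rng.integers(0, 6, size=5000)

        # PCA captures the main spectral variation; discriminant analysis labels pixels.
        clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        clf.fit(X_train, y_train)

        # Classify every pixel of a hyperspectral cube (rows x cols x bands).
        cube = rng.standard_normal((64, 64, 128))
        labels = clf.predict(cube.reshape(-1, 128)).reshape(64, 64)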

  12. Automatic classification for mammogram backgrounds based on BI-RADS complexity definition and on a multi-content analysis framework

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the image based on features such as the complexity of the background and the visibility of the disease (lesions). Therefore, an automatic medical background classification tool for mammograms would help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology as well as assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision-tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true-positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
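    The weak-to-strong classifier combination described above follows the standard AdaBoost pattern, which the hedged sketch below reproduces with scikit-learn on placeholder feature vectors; the actual MCA texture and breast-density features and the BI-RADS category labels are not available here.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        # Placeholder data: one feature vector per mammogram background (texture and
        # density features in the MCA framework) and a binary label for one category.
        rng = np.random.default_rng(42)
        X = rng.standard_normal((400, 20))
        y = rng.integers(0, 2, size=400)

        # AdaBoost iterations combine many weak learners (by default, depth-1
        # decision trees, i.e. stumps) into a single "strong classifier".
        strong = AdaBoostClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(strong, X, y, cv=5).mean())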

  13. Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online

    NASA Astrophysics Data System (ADS)

    Romano, C.; Graff, P. V.; Runco, S.

    2017-12-01

    Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space Center to engage the public in the analysis of images taken from space by astronauts to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while learning complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete required tasks. A group of six users was recruited online for unmoderated and initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio. The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project: • Concise explanation of the project, its context, and its purpose; • Including a mention of the funding agency (in this case, NASA); • A preview of the specific tasks required of participants; • A dedicated user interface for the actual citizen science interaction. In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.

  14. Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online

    NASA Technical Reports Server (NTRS)

    Romano, Cia; Graff, Paige V.; Runco, Susan

    2017-01-01

    Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space center to engage the public in the analysis of images taken from space by astronauts to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while learning complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete required tasks. A group of six users was recruited online for unmoderated and initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio. The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project: (1) Concise explanation of the project, its context, and its purpose; (2) Including a mention of the funding agency (in this case, NASA); (3) A preview of the specific tasks required of participants; (4) A dedicated user interface for the actual citizen science interaction. In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.

  15. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields

    PubMed Central

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674

  16. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    PubMed

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.
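    The paper's 3D MRF formulation is not reproduced here, but the general idea of MRF-based segmentation can be sketched in 2D with iterated conditional modes (ICM): each pixel label trades off agreement with a class intensity model against agreement with its neighbours. The toy example below assumes a simple two-class intensity model and a Potts-style smoothness term.

        import numpy as np

        def icm_segment(image, means, beta=2.0, n_iter=10):
            """Toy MRF segmentation by Iterated Conditional Modes (ICM).

            Pixel energy = (intensity - class mean)^2 plus `beta` for every
            4-neighbour carrying a different label; labels are updated greedily.
            """
            labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
            rows, cols = image.shape
            for _ in range(n_iter):
                for r in range(rows):
                    for c in range(cols):
                        best_k, best_e = labels[r, c], np.inf
                        for k, m in enumerate(means):
                            e = (image[r, c] - m) ** 2
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                rr, cc = r + dr, c + dc
                                if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] != k:
                                    e += beta
                            if e < best_e:
                                best_k, best_e = k, e
                        labels[r, c] = best_k
            return labels

        # Illustrative use on a noisy synthetic two-region image.
        rng = np.random.default_rng(0)
        img = np.zeros((80, 80))
        img[:, 40:] = 1.0
        img += 0.4 * rng.standard_normal(img.shape)
        seg = icm_segment(img, means=[0.0, 1.0])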

  17. Multimodal Nonlinear Optical Imaging for Sensitive Detection of Multiple Pharmaceutical Solid-State Forms and Surface Transformations.

    PubMed

    Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J

    2017-11-07

    Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin-the crystalline gamma and alpha forms, as well as the amorphous form-were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.

  18. Big data in multiple sclerosis: development of a web-based longitudinal study viewer in an imaging informatics-based eFolder system for complex data analysis and management

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent

    2015-03-01

    In the past, we have developed and presented a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatments and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with patients' disease history, including treatments, symptom progressions, and any other changes in the disease profile. The image viewer is updated such that imaging studies can be viewed side-by-side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.

  19. Image classification of human carcinoma cells using complex wavelet-based covariance descriptors.

    PubMed

    Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A Enis; Cetin-Atalay, Rengul

    2013-01-01

    Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-CWT) coefficients and several morphological attributes are computed. Directionally selective DT-CWT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html.

  20. Image Classification of Human Carcinoma Cells Using Complex Wavelet-Based Covariance Descriptors

    PubMed Central

    Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A. Enis; Cetin-Atalay, Rengul

    2013-01-01

    Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-WT) coefficients and several morphological attributes are computed. Directionally selective DT-WT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html. PMID:23341908
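    A hedged sketch of the two main ingredients named above, a region covariance descriptor per subwindow and an RBF-kernel SVM, is given below; the dual-tree complex wavelet features of the paper are replaced by generic intensity and gradient feature maps, since a DT-CWT implementation is not assumed, and the data are placeholders.

        import numpy as np
        from sklearn.svm import SVC

        def covariance_descriptor(feature_maps):
            """Upper triangle of the covariance matrix of per-pixel features.

            `feature_maps` is a list of equally sized 2D arrays, one per feature
            (here: intensity and gradient magnitudes over one subwindow).
            """
            F = np.stack([f.ravel() for f in feature_maps], axis=0)  # (n_features, n_pixels)
            C = np.cov(F)
            return C[np.triu_indices_from(C)]

        def subwindow_features(window):
            gy, gx = np.gradient(window.astype(float))
            return [window.astype(float), np.abs(gx), np.abs(gy)]

        # Placeholder training set: one descriptor per subwindow plus a class label
        # (the paper distinguishes 14 cancer cell-line classes).
        rng = np.random.default_rng(3)
        windows = rng.random((200, 32, 32))
        labels = rng.integers(0, 14, size=200)
        X = np.array([covariance_descriptor(subwindow_features(w)) for w in windows])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)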

  1. Understanding the Organo-Carbonate Associations in Carbonaceous Chondrites with the Use of Micro-Raman Analysis

    NASA Technical Reports Server (NTRS)

    Chan, Q. H. S.; Zolensky, M. E.

    2015-01-01

    Carbonates can potentially provide sites for organic materials to accrue and develop into complex macromolecules. This study examines the organics associated with carbonates in carbonaceous chondrites using micro-Raman imaging.

  2. Understanding the Organo-Carbonate Associations in Carbonaceous Chondrites with the Use of Micro-Raman Analysis

    NASA Technical Reports Server (NTRS)

    Chan, Q. H. S.; Zolensky, M. E.

    2015-01-01

    Carbonates can potentially provide sites for organic materials to accrue and develop into complex macromolecules. This study examines the organics associated with carbonates in carbonaceous chondrites using µ-Raman imaging.

  3. Path (un)predictability of two interacting cracks in polycarbonate sheets using Digital Image Correlation.

    PubMed

    Koivisto, J; Dalbe, M-J; Alava, M J; Santucci, S

    2016-08-31

    Crack propagation is tracked here with Digital Image Correlation analysis in the test case of two cracks propagating in opposite directions in polycarbonate, a material with high ductility and a large Fracture Process Zone (FPZ). Depending on the initial distances between the two crack tips, one may observe different complex crack paths, in particular a regime where the two cracks repel each other prior to being attracted. We show by strain field analysis how this can be understood according to the principle of local symmetry: the crack propagates in the direction where the local shear - the mode II stress intensity factor KII in fracture mechanics language - is zero. Thus the interactions exhibited by the cracks arise from symmetry, from the initial geometry, and from the material properties which induce the FPZ. This complexity makes any long-range prediction of the path(s) impossible.

  4. Relating Gestures and Speech: An analysis of students' conceptions about geological sedimentary processes

    NASA Astrophysics Data System (ADS)

    Herrera, Juan Sebastian; Riggs, Eric M.

    2013-08-01

    Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.

  5. Automated Functional Analysis of Astrocytes from Chronic Time-Lapse Calcium Imaging Data

    PubMed Central

    Wang, Yinxue; Shi, Guilai; Miller, David J.; Wang, Yizhi; Wang, Congchao; Broussard, Gerard; Wang, Yue; Tian, Lin; Yu, Guoqiang

    2017-01-01

    Recent discoveries that astrocytes exert proactive regulatory effects on neural information processing and that they are deeply involved in normal brain development and disease pathology have stimulated broad interest in understanding astrocyte functional roles in brain circuits. Measuring astrocyte functional status is now technically feasible, due to recent advances in modern microscopy and ultrasensitive cell-type-specific genetically encoded Ca2+ indicators for chronic imaging. However, there is a big gap between the capability of generating large datasets via calcium imaging and the availability of sophisticated analytical tools for decoding the astrocyte function. Current practice is essentially manual, which not only limits analysis throughput but also risks introducing bias and missing important information latent in complex, dynamic big data. Here, we report a suite of computational tools, called Functional AStrocyte Phenotyping (FASP), for automatically quantifying the functional status of astrocytes. Considering the complex nature of Ca2+ signaling in astrocytes and the low signal-to-noise ratio, FASP is designed with data-driven and probabilistic principles, to flexibly account for various patterns and to perform robustly with noisy data. In particular, FASP explicitly models signal propagation, which rules out the applicability of tools designed for other types of data. We demonstrate the effectiveness of FASP using extensive synthetic and real data sets. The findings by FASP were verified by manual inspection. FASP also detected signals that were missed by purely manual analysis but could be confirmed by more careful manual examination under the guidance of automatic analysis. All algorithms and the analysis pipeline are packaged into a plugin for Fiji (ImageJ), with the source code freely available online at https://github.com/VTcbil/FASP. PMID:28769780

  6. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of Psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of all the parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today with proponents of both physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agrees with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes. It does not require complex, cognitive processes to calculate appearances in familiar Gestalt experiments.

  7. Automated Functional Analysis of Astrocytes from Chronic Time-Lapse Calcium Imaging Data.

    PubMed

    Wang, Yinxue; Shi, Guilai; Miller, David J; Wang, Yizhi; Wang, Congchao; Broussard, Gerard; Wang, Yue; Tian, Lin; Yu, Guoqiang

    2017-01-01

    Recent discoveries that astrocytes exert proactive regulatory effects on neural information processing and that they are deeply involved in normal brain development and disease pathology have stimulated broad interest in understanding astrocyte functional roles in brain circuits. Measuring astrocyte functional status is now technically feasible, due to recent advances in modern microscopy and ultrasensitive cell-type-specific genetically encoded Ca2+ indicators for chronic imaging. However, there is a big gap between the capability of generating large datasets via calcium imaging and the availability of sophisticated analytical tools for decoding the astrocyte function. Current practice is essentially manual, which not only limits analysis throughput but also risks introducing bias and missing important information latent in complex, dynamic big data. Here, we report a suite of computational tools, called Functional AStrocyte Phenotyping (FASP), for automatically quantifying the functional status of astrocytes. Considering the complex nature of Ca2+ signaling in astrocytes and the low signal-to-noise ratio, FASP is designed with data-driven and probabilistic principles, to flexibly account for various patterns and to perform robustly with noisy data. In particular, FASP explicitly models signal propagation, which rules out the applicability of tools designed for other types of data. We demonstrate the effectiveness of FASP using extensive synthetic and real data sets. The findings by FASP were verified by manual inspection. FASP also detected signals that were missed by purely manual analysis but could be confirmed by more careful manual examination under the guidance of automatic analysis. All algorithms and the analysis pipeline are packaged into a plugin for Fiji (ImageJ), with the source code freely available online at https://github.com/VTcbil/FASP.

  8. Dynamic three-dimensional phase-contrast technique in MRI: application to complex flow analysis around the artificial heart valve

    NASA Astrophysics Data System (ADS)

    Kim, Soo Jeong; Lee, Dong Hyuk; Song, Inchang; Kim, Nam Gook; Park, Jae-Hyeung; Kim, JongHyo; Han, Man Chung; Min, Byong Goo

    1998-07-01

    The phase-contrast (PC) method of magnetic resonance imaging (MRI) has been used for quantitative measurements of flow velocity and volume flow rate. It is a noninvasive technique which provides an accurate two-dimensional velocity image. Moreover, phase-contrast cine magnetic resonance imaging combines the flow-dependent contrast of PC-MRI with the ability of cardiac cine imaging to produce images throughout the cardiac cycle. However, the accuracy of the data acquired from a single through-plane velocity encoding can be reduced by the effect of flow direction, because in many practical cases flow directions are not uniform throughout the whole region of interest. In this study, we present a dynamic three-dimensional velocity vector mapping method using PC-MRI which can visualize the complex flow pattern through 3D volume-rendered images displayed dynamically. The direction of velocity mapping can be selected along any three orthogonal axes. By vector summation, the three maps can be combined to form a velocity vector map that determines the velocity regardless of the flow direction. At the same time, the cine method is used to observe the dynamic change of flow. We performed a phantom study to evaluate the accuracy of the suggested PC-MRI in continuous and pulsatile flow measurement. The pulsatile flow waveform is generated by a ventricular assist device (VAD), HEMO-PULSA (Biomedlab, Seoul, Korea). We varied the flow velocity, pulsatile flow waveform, and pulsing rate. The PC-MRI-derived velocities were compared with Doppler-derived results. The velocities of the two measurements showed a significant linear correlation. Dynamic three-dimensional velocity vector mapping was carried out for two cases. First, we applied it to the flow analysis around the artificial heart valve in a flat phantom. We could observe the flow pattern around the valve through the 3-dimensional cine image. Next, it was applied to the complex flow inside the polymer sac that is used as the ventricle in a totally implantable artificial heart (TAH). As a result, we could observe the flow pattern around the valves of the sac, even though such complex flow cannot be detected correctly with the conventional phase-contrast method. In addition, we could calculate the cardiac output from the TAH sac by quantitative measurement of the volume of flow across the outlet valve.
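    The vector-summation step described above, combining three orthogonal velocity-encoded maps into a direction-independent velocity map, is small enough to sketch directly; the snippet below also includes the standard phase-to-velocity scaling via the venc parameter, and all array shapes and values are illustrative rather than taken from the study.

        import numpy as np

        def phase_to_velocity(phase, venc):
            """Standard phase-contrast relation: a phase of +/- pi maps to +/- venc."""
            return venc * phase / np.pi

        def velocity_vector_map(vx, vy, vz):
            """Combine three orthogonal velocity-encoded maps into a speed map and
            a unit direction field, so the measured speed no longer depends on the
            flow direction relative to any single encoding axis."""
            v = np.stack([vx, vy, vz], axis=-1)
            speed = np.linalg.norm(v, axis=-1)
            direction = v / np.maximum(speed[..., None], 1e-12)
            return speed, direction

        # Illustrative cine frame: one 3D phase map per encoding axis, venc in cm/s.
        rng = np.random.default_rng(0)
        phases = [rng.uniform(-np.pi, np.pi, (32, 64, 64)) for _ in range(3)]
        vx, vy, vz = (phase_to_velocity(p, venc=150.0) for p in phases)
        speed, direction = velocity_vector_map(vx, vy, vz)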

  9. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

    The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on a regional and continental scale. The identification of major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time-scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time and related sample amount necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a complementary technique to bulk analytical methods, chemical imaging offers a new way to study air pollution events by obtaining major aerosol constituents with single particle features at high temporal resolutions and small sample volumes. The analysis of the chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry) with subsequent combined multivariate analytics. Combined chemical imaging of precipitated aerosol particles will be demonstrated by the following examples of air pollution events in Vienna: exceptional episodic events like the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan Dust Event from April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen breakout in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, and especially for the combined approach. Obtained chemical images will be correlated with bulk analytical results. Benefits of the overall methodological approach of combining bulk analytics and combined chemical imaging of exceptional episodic air pollution events will be discussed.

  10. Exploratory analysis of TOF-SIMS data from biological surfaces

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Seetharaman; Fletcher, John S.; Henderson, Alex; Lockyer, Nicholas P.; Vickerman, John C.

    2008-12-01

    The application of multivariate analytical tools enables simplification of TOF-SIMS datasets so that useful information can be extracted from complex spectra and images, especially those that do not give readily interpretable results. There is however a challenge in understanding the outputs from such analyses. The problem is complicated when analysing images, given the additional dimensions in the dataset. Here we demonstrate how the application of simple pre-processing routines can enable the interpretation of TOF-SIMS spectra and images. For the spectral data, TOF-SIMS spectra used to discriminate bacterial isolates associated with urinary tract infection were studied. Using different criteria for picking peaks before carrying out PC-DFA enabled identification of the discriminatory information with greater certainty. For the image data, an air-dried salt stressed bacterial sample, discussed in another paper by us in this issue, was studied. Exploration of the image datasets with and without normalisation prior to multivariate analysis by PCA or MAF resulted in different regions of the image being highlighted by the techniques.
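    As a minimal stand-in for the pre-processing referred to above, the sketch below applies total-ion-count normalisation to each pixel spectrum and then runs PCA on the image data; peak picking, PC-DFA and MAF are not shown, and the data cube is synthetic.

        import numpy as np
        from sklearn.decomposition import PCA

        def normalise_and_pca(cube, n_components=5):
            """Total-ion-count normalisation of per-pixel spectra followed by PCA.

            `cube` is a (rows, cols, n_peaks) TOF-SIMS image; returns PCA score
            images of shape (rows, cols, n_components).
            """
            rows, cols, n_peaks = cube.shape
            spectra = cube.reshape(-1, n_peaks).astype(float)
            tic = spectra.sum(axis=1, keepdims=True)
            spectra = spectra / np.maximum(tic, 1e-12)
            scores = PCA(n_components=n_components).fit_transform(spectra)
            return scores.reshape(rows, cols, n_components)

        # Illustrative call: 64 x 64 pixels, 300 picked peaks.
        scores = normalise_and_pca(np.random.default_rng(7).random((64, 64, 300)))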

  11. A scale-based connected coherence tree algorithm for image segmentation.

    PubMed

    Ding, Jundi; Ma, Runing; Chen, Songcan

    2008-02-01

    This paper presents a connected coherence tree algorithm (CCTA) for image segmentation with no prior knowledge. It aims to find regions of semantic coherence based on the proposed epsilon-neighbor coherence segmentation criterion. More specifically, with an adaptive spatial scale and an appropriate intensity-difference scale, CCTA typically identifies several sets of coherent neighboring pixels, each of which maximizes the probability of representing a single image content (including various kinds of complex backgrounds). In practice, each set of coherent neighboring pixels corresponds to a coherence class (CC). The fact that each CC just contains a single equivalence class (EC) theoretically ensures the separability of an arbitrary image. In addition, the resultant CCs are represented by tree-based data structures, named connected coherence trees (CCTs). In this sense, CCTA is a graph-based image analysis algorithm, which offers three advantages: 1) its fundamental idea, the epsilon-neighbor coherence segmentation criterion, is easy to interpret and comprehend; 2) it is efficient due to a linear computational complexity in the number of image pixels; 3) both subjective comparisons and objective evaluation have shown that it is effective for the tasks of semantic object segmentation and figure-ground separation in a wide variety of images. Those images either contain tiny, long and thin objects or are severely degraded by noise, uneven lighting, occlusion, poor illumination, and shadow.
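    The epsilon-neighbor coherence idea can be illustrated, in a much reduced form, by grouping 4-connected pixels whose intensity difference stays within a threshold using union-find, as in the toy sketch below; the published CCTA criterion, its adaptive scales and the tree representation are considerably richer than this.

        import numpy as np

        def epsilon_coherent_regions(image, eps):
            """Group 4-connected pixels whose intensity difference is <= eps.

            Union-find over neighbouring pixel pairs; each distinct value in the
            returned label image is one coherent region.
            """
            rows, cols = image.shape
            parent = np.arange(rows * cols)

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i

            def union(i, j):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri

            img = image.astype(float)
            for r in range(rows):
                for c in range(cols):
                    idx = r * cols + c
                    if c + 1 < cols and abs(img[r, c] - img[r, c + 1]) <= eps:
                        union(idx, idx + 1)
                    if r + 1 < rows and abs(img[r, c] - img[r + 1, c]) <= eps:
                        union(idx, idx + cols)
            return np.array([find(i) for i in range(rows * cols)]).reshape(rows, cols)

        labels = epsilon_coherent_regions(np.random.default_rng(5).random((40, 40)), eps=0.1)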

  12. The Hidden Fortress: structure and substructure of the complex strong lensing cluster SDSS J1029+2623

    NASA Astrophysics Data System (ADS)

    Oguri, Masamune; Schrabback, Tim; Jullo, Eric; Ota, Naomi; Kochanek, Christopher S.; Dai, Xinyu; Ofek, Eran O.; Richards, Gordon T.; Blandford, Roger D.; Falco, Emilio E.; Fohlmeister, Janine

    2013-02-01

    We present Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) observations of SDSS J1029+2623, a three-image quasar lens system produced by a foreground cluster at z = 0.584. Our strong lensing analysis reveals six additional multiply imaged galaxies in addition to the multiply imaged quasar. We confirm the complex nature of the mass distribution of the lensing cluster, with a bimodal dark matter distribution which deviates from the Chandra X-ray surface brightness distribution. The Einstein radius of the lensing cluster is estimated to be θE = 15.2 ± 0.5 arcsec for the quasar redshift of z = 2.197. We derive a radial mass distribution from the combination of strong lensing, HST/ACS weak lensing and Subaru/Suprime-cam weak lensing analysis results, finding a best-fitting virial mass of Mvir = 1.55 (+0.40/-0.35) × 10^14 h^-1 M⊙ and a concentration parameter of cvir = 25.7 (+14.1/-7.5). The lensing mass estimate at the outer radius is smaller than the X-ray mass estimate by a factor of ~2. We ascribe this large mass discrepancy to shock heating of the intracluster gas during a merger, which is also suggested by the complex mass and gas distributions and the high value of the concentration parameter. In the HST image, we also identify a probable galaxy, GX, in the vicinity of the faintest quasar image C. In strong lens models, the inclusion of GX explains the anomalous flux ratios between the quasar images. The morphology of the highly elongated quasar host galaxy is also well reproduced. The best-fitting model suggests large total magnifications of 30 for the quasar and 35 for the quasar host galaxy, and has an AB time delay consistent with the measured value.

  13. Visualization and characterization of engineered nanoparticles in complex environmental and food matrices using atmospheric scanning electron microscopy.

    PubMed

    Luo, P; Morrison, I; Dudkiewicz, A; Tiede, K; Boyes, E; O'Toole, P; Park, S; Boxall, A B

    2013-04-01

    Imaging and characterization of engineered nanoparticles (ENPs) in water, soils, sediment and food matrices is very important for research into the risks of ENPs to consumers and the environment. However, these analyses pose a significant challenge as most existing techniques require some form of sample manipulation prior to imaging and characterization, which can result in changes in the ENPs in a sample and in the introduction of analytical artefacts. This study therefore explored the application of a newly designed instrument, the atmospheric scanning electron microscope (ASEM), which allows the direct characterization of ENPs in liquid matrices and which therefore overcomes some of the limitations associated with existing imaging methods. ASEM was used to characterize the size distribution of a range of ENPs in a selection of environmental and food matrices, including supernatant of natural sediment, test medium used in ecotoxicology studies, bovine serum albumin and tomato soup under atmospheric conditions. The obtained imaging results were compared to results obtained using conventional imaging by transmission electron microscope (TEM) and SEM as well as to size distribution data derived from nanoparticle tracking analysis (NTA). ASEM analysis was found to be a complementary technique to existing methods that is able to visualize ENPs in complex liquid matrices and to provide ENP size information without extensive sample preparation. ASEM images can detect ENPs in liquids down to 30 nm and to a level of 1 mg L(-1) (9×10(8) particles mL(-1), 50 nm Au ENPs). The results indicate that ASEM is a highly complementary method to existing approaches for analyzing ENPs in complex media and that its use will allow researchers to study ENP behavior in situ, something that is currently extremely challenging to do. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  14. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; Thomas, J. D.

    2000-01-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  15. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography.

    PubMed

    Qin, J X; Shiota, T; Thomas, J D

    2000-11-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  16. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
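    The stochastic-variational model itself is not reproduced here, but the notion of "soft" membership can be illustrated with a simple EM-style alternation between class means and per-pixel membership probabilities, as in the sketch below; it omits the spatial regularisation that the Mumford-Shah functional provides, so it is only a pointer to the idea.

        import numpy as np

        def soft_segment(image, k=2, sigma=0.1, n_iter=50):
            """Soft segmentation of a grey-level image into k classes.

            Each pixel gets a probability of belonging to each class (Gaussian
            likelihood around the class mean, normalised per pixel); class means
            are re-estimated from the soft memberships.  No spatial regularisation
            term is included, unlike the Mumford-Shah functional.
            """
            x = image.astype(float).ravel()
            means = np.linspace(x.min(), x.max(), k)
            for _ in range(n_iter):
                # E-step: per-pixel membership probabilities.
                logp = -((x[:, None] - means[None, :]) ** 2) / (2 * sigma ** 2)
                p = np.exp(logp - logp.max(axis=1, keepdims=True))
                p /= p.sum(axis=1, keepdims=True)
                # M-step: class means from the soft assignments.
                means = (p * x[:, None]).sum(axis=0) / np.maximum(p.sum(axis=0), 1e-12)
            return p.reshape(image.shape + (k,)), means

        rng = np.random.default_rng(1)
        img = np.concatenate([0.2 + 0.05 * rng.standard_normal((64, 32)),
                              0.8 + 0.05 * rng.standard_normal((64, 32))], axis=1)
        memberships, means = soft_segment(img, k=2)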

  17. Hybrid Electron Microscopy Normal Mode Analysis graphical interface and protocol.

    PubMed

    Sorzano, Carlos Oscar S; de la Rosa-Trevín, José Miguel; Tama, Florence; Jonić, Slavica

    2014-11-01

    This article presents an integral graphical interface to the Hybrid Electron Microscopy Normal Mode Analysis (HEMNMA) approach that was developed for capturing continuous motions of large macromolecular complexes from single-particle EM images. HEMNMA was shown to be a good approach to analyze multiple conformations of a macromolecular complex, but it could not be widely used in the EM field due to the lack of an integral interface. In particular, its use required switching among different software sources, and selecting modes for image analysis was difficult without a graphical interface. The graphical interface was thus developed to simplify the practical use of HEMNMA. It is implemented in the open-source software package Xmipp 3.1 (http://xmipp.cnb.csic.es) and only a small part of it relies on MATLAB, which is accessible through the main interface. Such integration provides the user with an easy way to perform the analysis of macromolecular dynamics and forms a direct connection to the single-particle reconstruction process. A step-by-step HEMNMA protocol with the graphical interface is given in full detail in the Supplementary material. The graphical interface will be useful to experimentalists who are interested in studies of continuous conformational changes of macromolecular complexes beyond the modeling of continuous heterogeneity in single particle reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Aptamer-conjugated nanobubbles for targeted ultrasound molecular imaging.

    PubMed

    Wang, Chung-Hsin; Huang, Yu-Fen; Yeh, Chih-Kuang

    2011-06-07

    Targeted ultrasound contrast agents can be prepared by specific bioconjugation techniques. The biotin-avidin complex is an extremely useful noncovalent binding system, but it might induce immunogenic side effects in humans. Previously proposed covalently conjugated systems suffered from low conjugation efficiency and complex procedures. In this study, we propose a covalently conjugated nanobubble coupled with nucleic acid ligands (aptamers) to provide higher specific affinity for ultrasound targeting studies. The sgc8c aptamer was linked with nanobubbles through thiol-maleimide coupling chemistry for specific targeting to CCRF-CEM cells. Further improvements to reduce the required time and avoid the degradation of nanobubbles during conjugation procedures were also made. Several analyses were used to evaluate the performance and consistency of the prepared nanobubbles, including size distribution, conjugation efficiency, and flow cytometry assays. Further, we applied our conjugated nanobubbles to ex vivo ultrasound targeted imaging and compared the resulting images with optical images. The results indicated the suitability of aptamer-conjugated nanobubbles for targeted ultrasound imaging and the practicability of using a highly sensitive ultrasound system in noninvasive biological research.

  19. Encapsulated Papillary Carcinoma in A Man with Gynecomastia: Ultrasonography, Mammography and Magnetic Resonance Imaging Features with Pathologic Correlation.

    PubMed

    Yılmaz, Ravza; Cömert, Rana Günöz; Aliyev, Samil; Toktaş, Yücel; Önder, Semen; Emirikçi, Selman; Özmen, Vahit

    2018-04-01

    Male breast cancer is an uncommon disease that constitutes 1% of all breast cancers, and encapsulated papillary carcinoma (EPC) is a rare subtype of male breast malignancy. Gynecomastia is the most common disease of the male breast. We report a 63-year-old male patient with EPC accompanied by gynecomastia that was diagnosed and treated at our breast center. Mammography showed an oval-shaped dense mass with circumscribed margins on a background of nodular gynecomastia. On ultrasonographic exam, we saw a well-circumscribed complex mass with a solid component which was vascular on Doppler ultrasonography. Magnetic resonance imaging revealed a complex cystic mass containing solid components. Dynamic images showed enhancement of the cystic mass wall and mural components. Tumor stage was evaluated as T2N0. Histologic examination and immunohistochemical analysis of the lesion, which showed no myoepithelial layer, revealed an encapsulated papillary carcinoma. To our knowledge, this is the first case report that describes MR imaging findings of encapsulated papillary carcinoma of the male breast.

  20. Fiber estimation and tractography in diffusion MRI: Development of simulated brain images and comparison of multi-fiber analysis methods at clinical b-values

    PubMed Central

    Wilkins, Bryce; Lee, Namgyun; Gajawelli, Niharika; Law, Meng; Leporé, Natasha

    2015-01-01

    Advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have led to many alternative diffusion sampling strategies and analysis methodologies. A common objective among methods is estimation of white matter fiber orientations within each voxel, as doing so permits in-vivo fiber-tracking and the ability to study brain connectivity and networks. Knowledge of how DW-MRI sampling schemes affect fiber estimation accuracy, and consequently tractography and the ability to recover complex white-matter pathways, as well as differences between results due to choice of analysis method and which method(s) perform optimally for specific data sets, all remain important problems, especially as tractography-based studies become common. In this work we begin to address these concerns by developing sets of simulated diffusion-weighted brain images which we then use to quantitatively evaluate the performance of six DW-MRI analysis methods in terms of estimated fiber orientation accuracy, false-positive (spurious) and false-negative (missing) fiber rates, and fiber-tracking. The analysis methods studied are: 1) a two-compartment “ball and stick” model (BSM) (Behrens et al., 2003); 2) a non-negativity constrained spherical deconvolution (CSD) approach (Tournier et al., 2007); 3) analytical q-ball imaging (QBI) (Descoteaux et al., 2007); 4) q-ball imaging with Funk-Radon and Cosine Transform (FRACT) (Haldar and Leahy, 2013); 5) q-ball imaging within constant solid angle (CSA) (Aganj et al., 2010); and 6) a generalized Fourier transform approach known as generalized q-sampling imaging (GQI) (Yeh et al., 2010). We investigate these methods using 20, 30, 40, 60, 90 and 120 evenly distributed q-space samples of a single shell, and focus on a signal-to-noise ratio (SNR = 18) and diffusion-weighting (b = 1000 s/mm2) common to clinical studies. We found the BSM and CSD methods consistently yielded the least fiber orientation error and simultaneously greatest detection rate of fibers. Fiber detection rate was found to be the most distinguishing characteristic between the methods, and a significant factor for complete recovery of tractography through complex white-matter pathways. For example, while all methods recovered similar tractography of prominent white matter pathways of limited fiber crossing, CSD (which had the highest fiber detection rate, especially for voxels containing three fibers) recovered the greatest number of fibers and largest fraction of correct tractography for a complex three-fiber crossing region. The synthetic data sets, ground-truth, and tools for quantitative evaluation are publically available on the NITRC website as the project “Simulated DW-MRI Brain Data Sets for Quantitative Evaluation of Estimated Fiber Orientations” at http://www.nitrc.org/projects/sim_dwi_brain PMID:25555998

  1. Fiber estimation and tractography in diffusion MRI: development of simulated brain images and comparison of multi-fiber analysis methods at clinical b-values.

    PubMed

    Wilkins, Bryce; Lee, Namgyun; Gajawelli, Niharika; Law, Meng; Leporé, Natasha

    2015-04-01

    Advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have led to many alternative diffusion sampling strategies and analysis methodologies. A common objective among methods is estimation of white matter fiber orientations within each voxel, as doing so permits in-vivo fiber-tracking and the ability to study brain connectivity and networks. Knowledge of how DW-MRI sampling schemes affect fiber estimation accuracy, tractography and the ability to recover complex white-matter pathways, differences between results due to choice of analysis method, and which method(s) perform optimally for specific data sets, all remain important problems, especially as tractography-based studies become common. In this work, we begin to address these concerns by developing sets of simulated diffusion-weighted brain images which we then use to quantitatively evaluate the performance of six DW-MRI analysis methods in terms of estimated fiber orientation accuracy, false-positive (spurious) and false-negative (missing) fiber rates, and fiber-tracking. The analysis methods studied are: 1) a two-compartment "ball and stick" model (BSM) (Behrens et al., 2003); 2) a non-negativity constrained spherical deconvolution (CSD) approach (Tournier et al., 2007); 3) analytical q-ball imaging (QBI) (Descoteaux et al., 2007); 4) q-ball imaging with Funk-Radon and Cosine Transform (FRACT) (Haldar and Leahy, 2013); 5) q-ball imaging within constant solid angle (CSA) (Aganj et al., 2010); and 6) a generalized Fourier transform approach known as generalized q-sampling imaging (GQI) (Yeh et al., 2010). We investigate these methods using 20, 30, 40, 60, 90 and 120 evenly distributed q-space samples of a single shell, and focus on a signal-to-noise ratio (SNR = 18) and diffusion-weighting (b = 1000 s/mm(2)) common to clinical studies. We found that the BSM and CSD methods consistently yielded the least fiber orientation error and simultaneously greatest detection rate of fibers. Fiber detection rate was found to be the most distinguishing characteristic between the methods, and a significant factor for complete recovery of tractography through complex white-matter pathways. For example, while all methods recovered similar tractography of prominent white matter pathways of limited fiber crossing, CSD (which had the highest fiber detection rate, especially for voxels containing three fibers) recovered the greatest number of fibers and largest fraction of correct tractography for complex three-fiber crossing regions. The synthetic data sets, ground-truth, and tools for quantitative evaluation are publically available on the NITRC website as the project "Simulated DW-MRI Brain Data Sets for Quantitative Evaluation of Estimated Fiber Orientations" at http://www.nitrc.org/projects/sim_dwi_brain. Copyright © 2014 Elsevier Inc. All rights reserved.
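
    The evaluation criteria described above (per-voxel orientation error plus spurious and missing fibers) can be made concrete with a small matching routine. The sketch below is a generic Python illustration, not the authors' evaluation code; the greedy matching and the 20-degree tolerance are assumptions.

```python
import numpy as np

def angular_error_deg(est, truth):
    """Minimum angle (degrees) between two fiber orientations,
    treating antipodal vectors as the same orientation."""
    est = est / np.linalg.norm(est)
    truth = truth / np.linalg.norm(truth)
    cos_angle = np.clip(abs(np.dot(est, truth)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def match_fibers(estimated, ground_truth, tol_deg=20.0):
    """Greedy matching of estimated to ground-truth fibers within a voxel.
    Returns per-fiber angular errors plus counts of missing (false-negative)
    and spurious (false-positive) fibers."""
    remaining = list(ground_truth)
    errors, false_pos = [], 0
    for e in estimated:
        if not remaining:
            false_pos += 1
            continue
        angles = [angular_error_deg(e, t) for t in remaining]
        best = int(np.argmin(angles))
        if angles[best] <= tol_deg:
            errors.append(angles[best])
            remaining.pop(best)
        else:
            false_pos += 1
    false_neg = len(remaining)
    return errors, false_neg, false_pos

# Example: one voxel with two crossing ground-truth fibers.
truth = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
est = [np.array([0.98, 0.10, 0.0]), np.array([0.05, 0.99, 0.05])]
errs, fn, fp = match_fibers(est, truth)
print(errs, fn, fp)
```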

  2. Hierarchical classification strategy for Phenotype extraction from epidermal growth factor receptor endocytosis screening.

    PubMed

    Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J

    2016-05-03

    Endocytosis is regarded as a mechanism of attenuating the epidermal growth factor receptor (EGFR) signaling and of receptor degradation. There is increasing evidence that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related Ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently, a dedicated automatic image and data analysis system is developed and applied to extract the phenotype measurements and distinguish different developmental episodes from the large number of images acquired through high-throughput imaging. For the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. Therefore, the manner in which prominent measurements are chosen to represent the dynamics of the EGFR process becomes a crucial step for the identification of the phenotype. In the subsequent data analysis, classification is used to categorize each observation by making use of all prominent measurements obtained from image analysis. Therefore, a better-constructed classification strategy will raise the performance of our image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in the EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated. The results of performance assessment clearly demonstrate that our hierarchical classification scheme combined with a selected set of features provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, it is shown that the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
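
    The hierarchical strategy can be sketched as a two-stage classifier: a first model assigns each observation to a coarse group of episodes, and a second, group-specific model resolves the fine phenotypic label. The code below is a minimal illustration with synthetic features and a Random Forest at each stage; the grouping, feature dimensions and classifier choice are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-cell feature vectors (e.g. wavelet texture measurements)
# labelled with four phenotypic episodes; episodes 0/1 and 2/3 are assumed
# to form two coarse groups that are easier to separate first.
X = rng.normal(size=(800, 20))
y = rng.integers(0, 4, size=800)
X[:, :4] += np.eye(4)[y] * 2.0            # inject some class structure
coarse = (y >= 2).astype(int)

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, y, coarse, test_size=0.25, random_state=0)

# Stage 1: coarse group; stage 2: one classifier per group for the fine label.
stage1 = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)
stage2 = {g: RandomForestClassifier(random_state=0).fit(X_tr[c_tr == g],
                                                        y_tr[c_tr == g])
          for g in (0, 1)}

pred_group = stage1.predict(X_te)
pred = np.array([stage2[g].predict(x.reshape(1, -1))[0]
                 for g, x in zip(pred_group, X_te)])
print("hierarchical accuracy:", round((pred == y_te).mean(), 2))
```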

  3. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; the reconstructions were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
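
    The noise normalization used to combine the two modalities can be sketched in a few lines: each sensor's signal is divided by its baseline noise standard deviation so that EEG and MEG become unit-free and can be stacked into one measurement vector. The values below are synthetic, and the baseline-based noise estimate is an assumption.

```python
import numpy as np

def noise_normalize(data, baseline):
    """Scale a sensors-by-time data matrix by each sensor's baseline
    noise standard deviation, yielding unit-free measurements."""
    noise_std = baseline.std(axis=1, keepdims=True)
    return data / noise_std

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5e-6, size=(64, 500))     # volts
meg = rng.normal(0.0, 1e-13, size=(306, 500))   # tesla
eeg_base = rng.normal(0.0, 5e-6, size=(64, 200))
meg_base = rng.normal(0.0, 1e-13, size=(306, 200))

# After normalization both modalities are dimensionless and can be stacked
# into a single measurement vector for joint source estimation.
combined = np.vstack([noise_normalize(eeg, eeg_base),
                      noise_normalize(meg, meg_base)])
print(combined.shape)
```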

  4. Ratioed scatter diagrams - An erotetic method for phase identification on complex surfaces using scanning Auger microscopy

    NASA Technical Reports Server (NTRS)

    Browning, R.

    1984-01-01

    By ratioing multiple Auger intensities and plotting a two-dimensional occupational scatter diagram while digitally scanning across an area, the number and elemental association of surface phases can be determined. This can prove a useful tool in scanning Auger microscopic analysis of complex materials. The technique is illustrated by results from an anomalous region on the reaction zone of a SiC/Ti-6Al-4V metal matrix composite material. The anomalous region is shown to be a single phase associated with sulphur and phosphorus impurities. Imaging of a selected phase from the ratioed scatter diagram is possible and may be a useful technique for presenting multiple scanning Auger images.
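
    A minimal sketch of the idea follows: two elemental intensity maps are ratioed to a reference signal, the two-dimensional occupational scatter diagram is built as a joint histogram of the ratios, and a selected cluster is mapped back to pixel space to image one phase. The intensity maps and cluster limits below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (128, 128)

# Hypothetical Auger peak-intensity maps for two elements (A, B) and a
# reference signal R, with a second phase of different composition in one corner.
A = rng.normal(100, 5, shape)
B = rng.normal(50, 5, shape)
R = rng.normal(200, 5, shape)
A[:40, :40] += 80
B[:40, :40] -= 20

# Ratioing to the reference suppresses topographic contrast; the occupational
# scatter diagram is then simply a joint histogram of the two ratios.
ratio_a = (A / R).ravel()
ratio_b = (B / R).ravel()
hist, a_edges, b_edges = np.histogram2d(ratio_a, ratio_b, bins=64)

# Selecting one cluster of the diagram and mapping it back to pixel space
# yields an image of the corresponding surface phase.
phase_mask = (A / R > 0.7) & (B / R < 0.25)
print(int(hist.max()), int(phase_mask.sum()))
```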

  5. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  6. Application of failure mode and effect analysis in a radiology department.

    PubMed

    Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B

    2011-01-01

    With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department. RSNA, 2010
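
    A common quantitative element of FMEA is ranking failure modes by a risk priority number (RPN), the product of severity, occurrence and detection scores. The sketch below illustrates that ranking on invented radiology failure modes; the scores and examples are hypothetical and not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    process_step: str
    failure: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (always detected) .. 10 (never detected)

    @property
    def rpn(self) -> int:
        """Risk priority number = severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("MRI screening", "missed implanted device", 9, 3, 4),
    FailureMode("Contrast injection", "extravasation", 5, 4, 2),
    FailureMode("Protocol selection", "wrong sequence chosen", 4, 5, 3),
]

# Rank failure modes so the highest-risk subprocesses are targeted first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.process_step:20s} {m.failure:28s} RPN={m.rpn}")
```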

  7. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    PubMed

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image-capturing and processing devices and algorithms, and advances in the development of novel stationary phases and separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. Variables such as gray-level intensities of pixels along the solvent front, peak areas, and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment, and normalization) were pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it required at least a basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
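
    The unfold-and-factor step common to such methods can be sketched generically: the spectral image cube is reshaped to a pixels-by-channels matrix and decomposed into abundance maps and component spectra. The example below uses non-negative matrix factorization as a stand-in for the constrained factor analysis; it is not the patented algorithm, and the synthetic cube is constructed to be spatially simple (discrete chemical phases).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
rows, cols, channels = 64, 64, 100

# Synthetic spectral image: two spatially discrete chemical phases,
# each with its own spectrum.
spectra = np.abs(rng.normal(size=(2, channels)))
abundance = np.zeros((rows, cols, 2))
abundance[:, :32, 0] = 1.0
abundance[:, 32:, 1] = 1.0
cube = abundance @ spectra + 0.01 * rng.normal(size=(rows, cols, channels))

# Unfold to a (pixels x channels) matrix and extract components.
X = cube.reshape(-1, channels)
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
scores = nmf.fit_transform(np.clip(X, 0, None))   # abundances (pixels x 2)
loadings = nmf.components_                        # component spectra (2 x channels)
abundance_maps = scores.reshape(rows, cols, 2)
print(abundance_maps.shape, loadings.shape)
```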

  9. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  10. Multimodal image analysis of clinical influences on preterm brain development

    PubMed Central

    Ball, Gareth; Aljabar, Paul; Nongena, Phumza; Kennea, Nigel; Gonzalez‐Cinca, Nuria; Falconer, Shona; Chew, Andrew T.M.; Harper, Nicholas; Wurie, Julia; Rutherford, Mary A.; Edwards, A. David

    2017-01-01

    Objective: Premature birth is associated with numerous complex abnormalities of white and gray matter and a high incidence of long-term neurocognitive impairment. An integrated understanding of these abnormalities and their association with clinical events is lacking. The aim of this study was to identify specific patterns of abnormal cerebral development and their antenatal and postnatal antecedents. Methods: In a prospective cohort of 449 infants (226 male), we performed a multivariate and data-driven analysis combining multiple imaging modalities. Using canonical correlation analysis, we sought separable multimodal imaging markers associated with specific clinical and environmental factors and correlated to neurodevelopmental outcome at 2 years. Results: We found five independent patterns of neuroanatomical variation that related to clinical factors including age, prematurity, sex, intrauterine complications, and postnatal adversity. We also confirmed the association between imaging markers of neuroanatomical abnormality and poor cognitive and motor outcomes at 2 years. Interpretation: This data-driven approach defined novel and clinically relevant imaging markers of cerebral maldevelopment, which offer new insights into the nature of preterm brain injury. Ann Neurol 2017;82:233–246 PMID:28719076
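
    The core statistical step, canonical correlation analysis between clinical variables and imaging features, can be illustrated with scikit-learn on synthetic data. The variable names and the injected latent structure below are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 200

# Hypothetical per-subject clinical factors and multimodal imaging features.
clinical = rng.normal(size=(n_subjects, 5))     # e.g. gestational age, sex, ...
latent = clinical[:, :2] @ rng.normal(size=(2, 1))
imaging = np.hstack([latent + 0.5 * rng.normal(size=(n_subjects, 1))
                     for _ in range(20)])       # 20 imaging features

# Canonical correlation analysis finds paired linear combinations (modes)
# of clinical variables and imaging features that co-vary.
cca = CCA(n_components=2)
clin_scores, img_scores = cca.fit_transform(clinical, imaging)
for k in range(2):
    r = np.corrcoef(clin_scores[:, k], img_scores[:, k])[0, 1]
    print(f"canonical mode {k}: r = {r:.2f}")
```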

  11. Cutting-edge analysis of extracellular microparticles using ImageStream(X) imaging flow cytometry.

    PubMed

    Headland, Sarah E; Jones, Hefin R; D'Sa, Adelina S V; Perretti, Mauro; Norling, Lucy V

    2014-06-10

    Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. In order for microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of Amnis ImageStream(X) Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm to 20 nm; and microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStream(X) could be used effectively to advance this scientific field.

  12. Remote sensing of the Fram Strait marginal ice zone

    USGS Publications Warehouse

    Shuchman, R.A.; Burns, B.A.; Johannessen, O.M.; Josberger, E.G.; Campbell, W.J.; Manley, T.O.; Lannelongue, N.

    1987-01-01

    Sequential remote sensing images of the Fram Strait marginal ice zone played a key role in elucidating the complex interactions of the atmosphere, ocean, and sea ice. Analysis of a subset of these images covering a 1-week period provided quantitative data on the mesoscale ice morphology, including ice edge positions, ice concentrations, floe size distribution, and ice kinematics. The analysis showed that, under light to moderate wind conditions, the morphology of the marginal ice zone reflects the underlying ocean circulation. High-resolution radar observations showed the location and size of ocean eddies near the ice edge. Ice kinematics from sequential radar images revealed an ocean eddy beneath the interior pack ice that was verified by in situ oceanographic measurements.

  13. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain.

    PubMed

    Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša

    2017-01-01

    This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size and resolution, and image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard box-counting method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, shape, complexity, and irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.
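
    For reference, a standard 2D box-counting estimate (the basis of the traditional method discussed here) can be written in a few lines; the box sizes and the test image below are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box dimension of a binarized 2D image (nonzero = object)
    as the slope of log(count) versus log(1 / box size)."""
    img = np.asarray(binary_image) > 0
    counts = []
    for s in box_sizes:
        # Pad so the image splits into whole s x s boxes.
        pad_r = (-img.shape[0]) % s
        pad_c = (-img.shape[1]) % s
        padded = np.pad(img, ((0, pad_r), (0, pad_c)))
        blocks = padded.reshape(padded.shape[0] // s, s,
                                padded.shape[1] // s, s)
        # A box is "occupied" if it contains any object pixel.
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: a filled disc should give a dimension close to 2.
yy, xx = np.mgrid[:256, :256]
disc = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
print(box_counting_dimension(disc))
```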

  14. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research results from around the world demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. This paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
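
    The grayscale morphological operations themselves are standard; the sketch below shows them with SciPy on a CPU purely as an illustration of the operations, not of the FPGA-based reconfigurable implementation described in the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
selem = np.ones((3, 3))  # 3x3 structuring element

# Basic grayscale morphology: erosion, dilation, opening, closing.
eroded  = ndimage.grey_erosion(image, footprint=selem)
dilated = ndimage.grey_dilation(image, footprint=selem)
opened  = ndimage.grey_opening(image, footprint=selem)   # erosion then dilation
closed  = ndimage.grey_closing(image, footprint=selem)   # dilation then erosion

# A morphological gradient highlights edges, a common first step in
# vision pipelines for autonomous robots.
gradient = dilated.astype(int) - eroded.astype(int)
print(gradient.min(), gradient.max())
```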

  15. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued, phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-sensitive NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
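
    A minimal sketch of the described scheme follows: the first-order (linear) phase error is taken from the phase of the lag-1 spatial autocorrelation, and the zero-order term from the peak of the phase histogram of the ramp-corrected image. The magnitude weighting and bin count are assumptions.

```python
import numpy as np

def phase_correct(img):
    """Correct a linear (first-order) and constant (zero-order) phase error
    in a complex-valued image along its last axis."""
    # First order: the phase of the lag-1 spatial autocorrelation estimates
    # the per-pixel phase ramp.
    autocorr = np.sum(img[..., 1:] * np.conj(img[..., :-1]))
    ramp = np.angle(autocorr)
    x = np.arange(img.shape[-1])
    img1 = img * np.exp(-1j * ramp * x)

    # Zero order: take the mode of the phase histogram of the ramp-corrected
    # image (weighted by magnitude so noise pixels contribute little).
    phases = np.angle(img1).ravel()
    weights = np.abs(img1).ravel()
    hist, edges = np.histogram(phases, bins=180, range=(-np.pi, np.pi),
                               weights=weights)
    phi0 = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return img1 * np.exp(-1j * phi0)

# Example: a real test image distorted by known phase errors is recovered.
truth = np.outer(np.hanning(64), np.hanning(64))
x = np.arange(64)
distorted = truth * np.exp(1j * (0.05 * x + 0.7))
corrected = phase_correct(distorted)
print(np.abs(np.angle(corrected * np.conj(truth + 1e-12))).mean())
```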

  16. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most studies are based on metrical and morphological methods for building an identification profile. A simple image analysis-based method is presented, aimed at correlating the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them to age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R2 of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The suggested method is considered useful not only for building an identification profile during virtopsy, but also for further studies seeking a quantitative correlate of tissue ultrastructure characteristics without complex and expensive methods beyond image analysis.
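
    A rough sketch of the approach, with hypothetical grey-level-histogram descriptors and synthetic regions of interest, is shown below; the actual variables and regression models used in the study may differ.

```python
import numpy as np

def glh_variables(region):
    """A few grey-level-histogram descriptors of a CT region of interest."""
    hist, _ = np.histogram(region, bins=256, range=(0, 256), density=True)
    levels = np.arange(256)
    mean = (hist * levels).sum()
    var = (hist * (levels - mean) ** 2).sum()
    skew = (hist * (levels - mean) ** 3).sum() / max(var, 1e-9) ** 1.5
    entropy = -(hist[hist > 0] * np.log2(hist[hist > 0])).sum()
    return [mean, var, skew, entropy]

# Hypothetical sample: one ROI per subject plus known ages, fitted with
# ordinary least squares (age = b0 + b1*mean + b2*var + ...).
rng = np.random.default_rng(0)
ages = rng.uniform(20, 80, size=120)
rois = [np.clip(rng.normal(150 - a, 20 + 0.2 * a, size=(32, 32)), 0, 255)
        for a in ages]

X = np.array([glh_variables(r) for r in rois])
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ages, rcond=None)
pred = A @ coef
ss_res = ((ages - pred) ** 2).sum()
ss_tot = ((ages - ages.mean()) ** 2).sum()
print("R^2 =", round(1 - ss_res / ss_tot, 2))
```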

  17. Monitoring of Freezing Dynamics in Trees: A Simple Phase Shift Causes Complexity

    PubMed Central

    Charra-Vaskou, Katline

    2017-01-01

    During winter, trees have to cope with harsh conditions, including extreme freeze-thaw stress. This study focused on ice nucleation and propagation, related water shifts and xylem cavitation, as well as cell damage, and was based on in situ monitoring of xylem (thermocouples) and surface temperatures (infrared imaging), ultrasonic emissions, and dendrometer analysis. Field experiments during late winter on Picea abies growing at the alpine timberline revealed three distinct freezing patterns: (1) from the top of the tree toward the base, (2) from thin branches toward the main stem's top and base, and (3) from the base toward the top. Infrared imaging showed freezing within branches from their base toward distal parts. Such complex freezing causes dynamic and heterogeneous patterns in water potential and probably in cavitation. This study highlights the interaction between environmental conditions upon freezing and thawing and demonstrates the enormous complexity of freezing processes in trees. Diameter shrinkage, which indicated water fluxes within the stem, and acoustic emission analysis, which indicated cavitation events near the ice front upon freezing, were both related to minimum temperature and, upon thawing, related to vapor pressure deficit and soil temperature. These complex patterns, emphasizing the common mechanisms between frost and drought stress, shed new light on winter tree physiology. PMID:28242655

  18. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
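
    The objective function can be sketched as the fraction of points sampled on a candidate circle that land on edge pixels; EMO then evolves the circle parameters to maximize such a score. The version below is a simplified stand-in for the paper's objective, not the authors' exact formulation.

```python
import numpy as np

def circle_fitness(edge_map, cx, cy, r, n_points=180):
    """Objective for a candidate circle: the fraction of uniformly sampled
    circle points that fall on edge pixels of the smear image."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(theta)).astype(int)
    ys = np.round(cy + r * np.sin(theta)).astype(int)
    inside = (xs >= 0) & (xs < edge_map.shape[1]) & \
             (ys >= 0) & (ys < edge_map.shape[0])
    hits = edge_map[ys[inside], xs[inside]].sum()
    return hits / n_points

# Toy edge map containing one circular "cell" of radius 20 centred at (50, 50).
edge = np.zeros((100, 100), dtype=bool)
t = np.linspace(0, 2 * np.pi, 720)
edge[np.round(50 + 20 * np.sin(t)).astype(int),
     np.round(50 + 20 * np.cos(t)).astype(int)] = True

print(circle_fitness(edge, 50, 50, 20))   # close to 1.0 (good candidate)
print(circle_fitness(edge, 30, 70, 15))   # close to 0.0 (poor candidate)
```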

  19. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  20. Laser Ablation-Aerosol Mass Spectrometry-Chemical Ionization Mass Spectrometry for Ambient Surface Imaging

    DOE PAGES

    Berry, Jennifer L.; Day, Douglas A.; Elseberg, Tim; ...

    2018-02-20

    Mass spectrometry imaging is becoming an increasingly common analytical technique due to its ability to provide spatially resolved chemical information. In this paper, we report a novel imaging approach combining laser ablation with two mass spectrometric techniques, aerosol mass spectrometry and chemical ionization mass spectrometry, separately and in parallel. Both mass spectrometric methods provide the fast response, rapid data acquisition, low detection limits, and high-resolution peak separation desirable for imaging complex samples. Additionally, the two techniques provide complementary information, with aerosol mass spectrometry providing near universal detection of all aerosol molecules and chemical ionization mass spectrometry with a heated inlet providing molecular-level detail of both gases and aerosols. The two techniques operate with atmospheric pressure interfaces and require no matrix addition for ionization, allowing for samples to be investigated in their native state under ambient pressure conditions. We demonstrate the ability of laser ablation-aerosol mass spectrometry-chemical ionization mass spectrometry (LA-AMS-CIMS) to create 2D images of both standard compounds and complex mixtures. Finally, the results suggest that LA-AMS-CIMS, particularly when combined with advanced data analysis methods, could have broad application in mass spectrometry imaging.

  1. Laser Ablation-Aerosol Mass Spectrometry-Chemical Ionization Mass Spectrometry for Ambient Surface Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jennifer L.; Day, Douglas A.; Elseberg, Tim

    Mass spectrometry imaging is becoming an increasingly common analytical technique due to its ability to provide spatially resolved chemical information. In this paper, we report a novel imaging approach combining laser ablation with two mass spectrometric techniques, aerosol mass spectrometry and chemical ionization mass spectrometry, separately and in parallel. Both mass spectrometric methods provide the fast response, rapid data acquisition, low detection limits, and high-resolution peak separation desirable for imaging complex samples. Additionally, the two techniques provide complementary information, with aerosol mass spectrometry providing near universal detection of all aerosol molecules and chemical ionization mass spectrometry with a heated inlet providing molecular-level detail of both gases and aerosols. The two techniques operate with atmospheric pressure interfaces and require no matrix addition for ionization, allowing for samples to be investigated in their native state under ambient pressure conditions. We demonstrate the ability of laser ablation-aerosol mass spectrometry-chemical ionization mass spectrometry (LA-AMS-CIMS) to create 2D images of both standard compounds and complex mixtures. Finally, the results suggest that LA-AMS-CIMS, particularly when combined with advanced data analysis methods, could have broad application in mass spectrometry imaging.

  2. Local spatial frequency analysis for computer vision

    NASA Technical Reports Server (NTRS)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
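
    A minimal sketch of such a combined space/frequency representation is a windowed 2D FFT evaluated over the image, reporting the dominant local spatial frequency at each position; the window size, step and chirp test image below are arbitrary choices, not the representation developed in the paper.

```python
import numpy as np

def local_dominant_frequency(image, window=32, step=16):
    """For each image patch, return the dominant spatial frequency
    (cycles per pixel) found with a windowed 2D FFT: a simple
    combined space/frequency representation."""
    win = np.hanning(window)[:, None] * np.hanning(window)[None, :]
    freqs = np.fft.fftfreq(window)
    radius = np.hypot(freqs[:, None], freqs[None, :])
    rows = list(range(0, image.shape[0] - window + 1, step))
    cols = list(range(0, image.shape[1] - window + 1, step))
    out = np.zeros((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            patch = image[r:r + window, c:c + window] * win
            power = np.abs(np.fft.fft2(patch)) ** 2
            power[0, 0] = 0.0                      # ignore the DC term
            out[i, j] = radius.flat[np.argmax(power)]
    return out

# A texture whose spatial frequency increases from left to right (a chirp):
# the map recovers low frequencies on the left and higher ones on the right.
x = np.linspace(0, 1, 256)
image = np.sin(2 * np.pi * (2 + 30 * x) * x)[None, :] * np.ones((256, 1))
freq_map = local_dominant_frequency(image)
print(freq_map[0, 0], freq_map[0, -1])
```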

  3. Cysteamine-based cell-permeable Zn(2+)-specific molecular bioimaging materials: from animal to plant cells.

    PubMed

    Sinha, Sougata; Dey, Gourab; Kumar, Sunil; Mathew, Jomon; Mukherjee, Trinetra; Mukherjee, Subhrakanti; Ghosh, Subrata

    2013-11-27

    Structure-interaction/fluorescence relationship studies led to the development of a small chemical library of Zn(2+)-specific cysteamine-based molecular probes. The probe L5 with higher excitation/emission wavelengths, which absorbs in the visible region and emits in the green, was chosen as a model imaging material for biological studies. After successful imaging of intracellular zinc in four different kinds of cells including living organisms, plant, and animal cells, in vivo imaging potential of L5 was evaluated using plant systems. In vivo imaging of translocation of zinc through the stem of a small herb with a transparent stem, Peperomia pellucida, confirmed the stability of L5 inside biological systems and the suitability of L5 for real-time analysis. Similarly, fluorescence imaging of zinc in gram sprouts revealed the efficacy of the probe in the detection and localization of zinc in cereal crops. This imaging technique will help in knowing the efficiency of various techniques used for zinc enrichment of cereal crops. Computational analyses were carried out to better understand the structure, the formation of probe-Zn(2+) complexes, and the emission properties of these complexes.

  4. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture which consist of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework for segmenting different real-case medical image scenarios on public available datasets including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  5. Gadolinium heteropoly complex K17[Gd(P2W17O61)2] as a potential MRI contrast agent

    NASA Astrophysics Data System (ADS)

    Sun, Guoying; Feng, Jianghua; Wu, Huifeng; Pei, Fengkui; Fang, Ke; Lei, Hao

    2004-10-01

    Gadolinium heteropoly complex K17[Gd(P2W17O61)2] has been evaluated by in vitro and in vivo experiments as a potential contrast agent for magnetic resonance imaging (MRI). Thermal analysis and a conductivity study indicate that this complex has good thermal stability and a wide pH stability range. The T1 relaxivity is 7.59 mM-1 s-1 in aqueous solution and 7.97 mM-1 s-1 in 0.725 mmol l-1 bovine serum albumin (BSA) solution at 25 °C and 9.39 T, respectively. MR imaging of three male Sprague-Dawley rats showed remarkable enhancement in rat liver after intravenous injection, which persisted longer than with Gd-DTPA. The signal intensity increased by 57.1±16.9% during the whole imaging period at a dose of 0.082 mmol kg-1. Our preliminary in vitro and in vivo studies indicate that K17[Gd(P2W17O61)2] is a potential liver-specific MRI contrast agent.
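
    The relaxivity quoted above is, by definition, the slope of the longitudinal relaxation rate 1/T1 versus contrast-agent concentration. The sketch below fits that slope on hypothetical titration data chosen only to be roughly consistent with the reported value.

```python
import numpy as np

# Hypothetical T1 measurements (seconds) of solutions at several gadolinium
# concentrations (mM); relaxivity r1 is the slope of 1/T1 versus concentration.
conc_mM = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
T1_s    = np.array([3.00, 0.92, 0.45, 0.24, 0.13])

R1 = 1.0 / T1_s                      # longitudinal relaxation rates (s^-1)
r1, intercept = np.polyfit(conc_mM, R1, 1)
print(f"r1 = {r1:.2f} mM^-1 s^-1, diamagnetic rate = {intercept:.2f} s^-1")
```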

  6. Intelligence and cortical thickness in children with complex partial seizures.

    PubMed

    Tosun, Duygu; Caplan, Rochelle; Siddarth, Prabha; Seidenberg, Michael; Gurbani, Suresh; Toga, Arthur W; Hermann, Bruce

    2011-07-15

    Prior studies on healthy children have demonstrated regional variations and a complex and dynamic relationship between intelligence and cerebral tissue. Yet, there is little information regarding the neuroanatomical correlates of general intelligence in children with epilepsy compared to healthy controls. In vivo imaging techniques, combined with methods for advanced image processing and analysis, offer the potential to examine quantitative mapping of brain development and its abnormalities in childhood epilepsy. A surface-based, computational high resolution 3-D magnetic resonance image analytic technique was used to compare the relationship of cortical thickness with age and intelligence quotient (IQ) in 65 children and adolescents with complex partial seizures (CPS) and 58 healthy controls, aged 6-18 years. Children were grouped according to health status (epilepsy; controls) and IQ level (average and above; below average) and compared on age-related patterns of cortical thickness. Our cross-sectional findings suggest that disruption in normal age-related cortical thickness expression is associated with intelligence in pediatric CPS patients both with average and below average IQ scores. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    PubMed

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

    Computer vision-based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written in R. The equations used to calculate the image descriptors are also provided.
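
    The study's pipeline was implemented in R; the sketch below is an analogous Python illustration of the same idea (mask-based shape descriptors followed by principal component analysis), with synthetic masks and descriptor choices that are assumptions rather than the authors' feature set.

```python
import numpy as np
from skimage import measure
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rosette_features(binary_mask):
    """A few generic shape descriptors from a segmented rosette mask."""
    props = measure.regionprops(binary_mask.astype(int))[0]
    return [props.area, props.perimeter, props.eccentricity,
            props.solidity, props.extent,
            4 * np.pi * props.area / max(props.perimeter, 1) ** 2]  # circularity

# Hypothetical dataset: one segmented mask per plant (here: random ellipses).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
masks = []
for _ in range(60):
    a, b = rng.uniform(15, 50, size=2)
    masks.append(((xx - 64) ** 2 / a ** 2 + (yy - 64) ** 2 / b ** 2) < 1)

X = np.array([rosette_features(m) for m in masks])
pca = PCA(n_components=5).fit(StandardScaler().fit_transform(X))
print(np.round(pca.explained_variance_ratio_, 2))
```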

  8. Extracting microtubule networks from superresolution single-molecule localization microscopy data

    PubMed Central

    Zhang, Zhen; Nishimura, Yukako; Kanchanawong, Pakorn

    2017-01-01

    Microtubule filaments form ubiquitous networks that specify spatial organization in cells. However, quantitative analysis of microtubule networks is hampered by their complex architecture, limiting insights into the interplay between their organization and cellular functions. Although superresolution microscopy has greatly facilitated high-resolution imaging of microtubule filaments, extraction of complete filament networks from such data sets is challenging. Here we describe a computational tool for automated retrieval of microtubule filaments from single-molecule-localization–based superresolution microscopy images. We present a user-friendly, graphically interfaced implementation and a quantitative analysis of microtubule network architecture phenotypes in fibroblasts. PMID:27852898

  9. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
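
    The core idea, training a regressor to map automatically extracted image descriptors to a manually measured trait and then predicting the trait for the unannotated remainder, can be sketched with scikit-learn. The descriptors, trait and split below are synthetic stand-ins for the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_descriptors = 500, 40

# Hypothetical automatically extracted image descriptors and a manually
# measured architectural trait (e.g. total root length) for each image.
X = rng.normal(size=(n_images, n_descriptors))
true_weights = rng.normal(size=n_descriptors)
y = X @ true_weights + 0.5 * rng.normal(size=n_images)

# Train on an annotated subset, then predict the trait for the rest of the
# dataset from descriptors alone.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.3,
                                                    random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_rest, y_rest), 2))
```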

  10. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  11. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    Understanding the interactions of structured communities known as “biofilms” and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus, new software is required for the effective interpretation and analysis of the data. This work describes the development and application of the capability to analyze and visualize high-resolution X-ray micro-tomography datasets.

  12. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    NASA Astrophysics Data System (ADS)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential topic in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If none of the obtained gray-scale differences is greater than the threshold value, the fuzzy matching of the quantum images is successful. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at low cost and also enables an exponentially significant speedup via quantum parallel computation.
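
    A classical (non-quantum) analogue of the gray-scale-difference criterion is easy to state: a window of the reference image matches the template if every absolute pixel difference is within the threshold. The sketch below illustrates that criterion only; it says nothing about the quantum encoding or speedup.

```python
import numpy as np

def fuzzy_match(reference, template, threshold):
    """Slide the template over the reference image; a region matches if
    every absolute gray-scale difference is within the threshold."""
    H, W = reference.shape
    h, w = template.shape
    matches = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            diff = np.abs(reference[r:r + h, c:c + w].astype(int) - template)
            if diff.max() <= threshold:
                matches.append((r, c))
    return matches

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
template = reference[20:28, 30:38].astype(int)
template[0, 0] += 5                     # small gray-level perturbation

print(fuzzy_match(reference, template, threshold=8))   # includes (20, 30)
```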

  13. Chandra/ACIS Observations of the 30 Doradus Star-Forming Complex

    NASA Astrophysics Data System (ADS)

    Townsley, Leisa; Broos, Patrick; Feigelson, Eric; Burrows, David; Chu, You-Hua; Garmire, Gordon; Griffiths, Richard; Maeda, Yoshitomo; Pavlov, George; Tsuboi, Yohko

    2002-04-01

    30 Doradus is the archetype giant extragalactic H II region, a massive star-forming complex in the Large Magellanic Cloud. We examine high-spatial-resolution X-ray images and spectra of the essential parts of 30 Doradus, obtained with the Advanced CCD Imaging Spectrometer (ACIS) aboard the Chandra X-ray Observatory. The central cluster of young high-mass stars, R136, is resolved at the arcsecond level, allowing spectral analysis of bright constituents; other OB/Wolf-Rayet binaries and multiple systems (e.g. R139, R140) are also detected. Spatially-resolved spectra are presented for N157B, the composite SNR containing a 16-msec pulsar. The spectrally soft superbubble structures seen by ROSAT are dramatically imaged by Chandra; we explore the spectral differences they exhibit. Taken together, the components of 30 Doradus give us an excellent microscopic view of high-energy phenomena seen on larger scales in more distant galaxies as starbursts and galactic winds.

  14. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
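
    A Bayesian retrieval of this kind can be sketched as a likelihood-weighted average over an a priori database of profiles and their simulated brightness temperatures. The forward model, database and noise covariance below are synthetic placeholders, not the GPROF database.

```python
import numpy as np

def bayesian_retrieval(y_obs, db_tb, db_rain, noise_cov):
    """Bayesian estimate of rain rate: a weighted average over an a priori
    database, with weights given by the likelihood of the observed
    brightness temperatures under each database entry."""
    inv_cov = np.linalg.inv(noise_cov)
    d = db_tb - y_obs                                  # (N, channels)
    chi2 = np.einsum("ij,jk,ik->i", d, inv_cov, d)     # Mahalanobis distances
    w = np.exp(-0.5 * chi2)
    return np.sum(w * db_rain) / np.sum(w)

rng = np.random.default_rng(0)
n_entries, n_channels = 5000, 4
db_rain = rng.gamma(shape=1.2, scale=2.0, size=n_entries)        # mm/h
# Hypothetical forward model: brightness temperatures depend on rain rate.
db_tb = 280.0 - np.outer(db_rain, np.array([3.0, 2.0, 1.0, 0.5])) \
        + rng.normal(0, 1.0, size=(n_entries, n_channels))

y_obs = 280.0 - 5.0 * np.array([3.0, 2.0, 1.0, 0.5])             # ~5 mm/h scene
noise_cov = np.eye(n_channels) * 2.0
print(bayesian_retrieval(y_obs, db_tb, db_rain, noise_cov))
```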

  15. New method for estimation of fluence complexity in IMRT fields and correlation with gamma analysis

    NASA Astrophysics Data System (ADS)

    Hanušová, T.; Vondráček, V.; Badraoui-Čuprová, K.; Horáková, I.; Koniarová, I.

    2015-01-01

    A new method for estimation of fluence complexity in Intensity Modulated Radiation Therapy (IMRT) fields is proposed. Unlike other previously published works, it is based on portal images calculated by the Portal Dose Calculation algorithm in Eclipse (version 8.6, Varian Medical Systems) in the plane of the EPID aS500 detector (Varian Medical Systems). Fluence complexity is given by the number and the amplitudes of dose gradients in these matrices. Our method is validated using a set of clinical plans where fluence has been smoothed manually so that each plan has a different level of complexity. Fluence complexity calculated with our tool is in accordance with the different levels of smoothing as well as results of gamma analysis, when calculated and measured dose matrices are compared. Thus, it is possible to estimate plan complexity before carrying out the measurement. If appropriate thresholds are determined which would distinguish between acceptably and overly modulated plans, this might save time in the re-planning and re-measuring process.
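
    The complexity score can be illustrated as counting and summing dose-gradient magnitudes above a threshold in the calculated portal-dose matrix. The gradient definition and the 5% threshold below are assumptions, not the paper's exact parameters.

```python
import numpy as np

def fluence_complexity(dose, amplitude_threshold=0.05):
    """Score a portal-dose matrix by the number and total amplitude of
    dose gradients exceeding a threshold (fraction of the maximum dose)."""
    gy, gx = np.gradient(dose / dose.max())
    magnitude = np.hypot(gx, gy)
    significant = magnitude > amplitude_threshold
    return significant.sum(), magnitude[significant].sum()

# A smooth field versus a highly modulated field (toy examples).
yy, xx = np.mgrid[:100, :100] / 100.0
smooth = np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.1)
modulated = smooth * (1.0 + 0.5 * np.sin(20 * np.pi * xx))

print(fluence_complexity(smooth))      # fewer / weaker gradients
print(fluence_complexity(modulated))   # more / stronger gradients
```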

  16. PHOG analysis of self-similarity in aesthetic images

    NASA Astrophysics Data System (ADS)

    Amirshahi, Seyed Ali; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2012-03-01

    In recent years, there have been efforts in defining the statistical properties of aesthetic photographs and artworks using computer vision techniques. However, it is still an open question how to distinguish aesthetic from non-aesthetic images with a high recognition rate. This is possibly because aesthetic perception is influenced also by a large number of cultural variables. Nevertheless, the search for statistical properties of aesthetic images has not been futile. For example, we have shown that the radially averaged power spectrum of monochrome artworks of Western and Eastern provenance falls off according to a power law with increasing spatial frequency (1/f^2 characteristics). This finding implies that this particular subset of artworks possesses a Fourier power spectrum that is self-similar across different scales of spatial resolution. Other types of aesthetic images, such as cartoons, comics and mangas, also display this type of self-similarity, as do photographs of complex natural scenes. Since the human visual system is adapted to encode images of natural scenes in a particularly efficient way, we have argued that artists imitate these statistics in their artworks. In support of this notion, we presented results that artists portray human faces with the self-similar Fourier statistics of complex natural scenes although real-world photographs of faces are not self-similar. In view of these previous findings, we investigated other statistical measures of self-similarity to characterize aesthetic and non-aesthetic images. In the present work, we propose a novel measure of self-similarity that is based on the Pyramid Histogram of Oriented Gradients (PHOG). For every image, we first calculate PHOG up to pyramid level 3. The similarity between the histogram of each section at a particular level and that of its parent section at the previous level (or the histogram at the ground level) is then calculated. The proposed approach is tested on datasets of aesthetic and non-aesthetic categories of monochrome images. The aesthetic image datasets comprise a large variety of artworks of Western provenance. Other man-made aesthetically pleasing images, such as comics, cartoons and mangas, were also studied. For comparison, a database of natural scene photographs is used, as well as datasets of photographs of plants, simple objects and faces that are in general of low aesthetic value. As expected, natural scenes exhibit the highest degree of PHOG self-similarity. Images of artworks also show high self-similarity values, followed by cartoons, comics and mangas. On average, other (non-aesthetic) image categories are less self-similar in the PHOG analysis. A measure of scale-invariant self-similarity (PHOG) allows a good separation of the different aesthetic and non-aesthetic image categories. Our results provide further support for the notion that, like complex natural scenes, images of artworks display a higher degree of self-similarity across different scales of resolution than other image categories. Whether the high degree of self-similarity is the basis for the perception of beauty in both complex natural scenery and artworks remains to be investigated.
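
    The PHOG-based self-similarity measure can be sketched as follows: gradient-orientation histograms are computed for the cells of a spatial pyramid, and each cell's histogram is compared (here by histogram intersection) with that of its parent cell. The bin count, pyramid depth and similarity function below are illustrative choices, not necessarily those of the authors.

```python
import numpy as np

def orientation_histogram(gx, gy, bins=16):
    """Gradient-orientation histogram weighted by gradient magnitude."""
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def phog_self_similarity(image, levels=3, bins=16):
    """Mean histogram intersection between each pyramid cell's histogram and
    that of its parent cell; higher values mean more self-similar images."""
    gy, gx = np.gradient(image.astype(float))
    sims = []
    for level in range(1, levels + 1):
        n = 2 ** level
        h, w = image.shape[0] // n, image.shape[1] // n
        hp, wp = 2 * h, 2 * w
        for i in range(n):
            for j in range(n):
                child = orientation_histogram(
                    gx[i * h:(i + 1) * h, j * w:(j + 1) * w],
                    gy[i * h:(i + 1) * h, j * w:(j + 1) * w], bins)
                parent = orientation_histogram(
                    gx[(i // 2) * hp:(i // 2 + 1) * hp, (j // 2) * wp:(j // 2 + 1) * wp],
                    gy[(i // 2) * hp:(i // 2 + 1) * hp, (j // 2) * wp:(j // 2 + 1) * wp], bins)
                sims.append(np.minimum(child, parent).sum())
    return float(np.mean(sims))

# Example call on a synthetic image.
rng = np.random.default_rng(0)
print(phog_self_similarity(rng.normal(size=(256, 256))))
```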

  17. Complex Tectonism on Ganymede

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Complex tectonism is evident in these images of Ganymede's surface. The solid state imaging camera on NASA's Galileo spacecraft imaged this region as it passed Ganymede during its second orbit through the Jovian system. The 80 kilometer (50 mile) wide lens-shaped feature in the center of the image is located at 32 degrees latitude and 188 degrees longitude along the border of a region of ancient dark terrain known as Marius Regio, and is near an area of younger bright terrain named Nippur Sulcus. The tectonism that created the structures in the bright terrain nearby has strongly affected the local dark terrain to form unusual structures such as the one shown here. The lens-like appearance of this feature is probably due to shearing of the surface, where areas have slid past each other and also rotated slightly. Note that in several places in these images, especially around the border of the lens-shaped feature, bright ridges appear to turn into dark grooves. Analysis of the geologic structures in areas like this is helping scientists to understand the complex tectonic history of Ganymede.

    North is to the top-left of the image, and the sun illuminates the surface from the southeast. The image covers an area about 63 kilometers (39 miles) by 120 kilometers (75 miles) across at a resolution of 188 meters (627 feet) per picture element. The images were taken on September 6, 1996 at a range of 18,522 kilometers (11,576 miles) by the solid state imaging (CCD) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  18. DIGITAL IMAGE ANALYSIS OF ZOSTERA MARINA LEAF INJURY

    EPA Science Inventory

    Current methods for assessing leaf injury in Zostera marina (eelgrass) utilize subjective indexes for desiccation injury and wasting disease. Because of the subjective nature of these measures, they are inherently imprecise, making them difficult to use in quantifying complex leaf...

  19. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
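    As an illustration of the lossless-on-tissue / lossy-on-background idea (not the authors' implementation), a simple sketch might separate tissue from empty glass with a threshold and write the two regions with lossless and lossy codecs respectively. The file names, the Otsu threshold, the morphological cleanup and the JPEG quality setting are placeholder assumptions.

    ```python
    import numpy as np
    import imageio.v2 as imageio
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_closing, disk

    # Hypothetical input: a grayscale whole-slide tile
    tile = imageio.imread("tile.png")

    # Separate tissue from empty background with a simple Otsu threshold
    # (the paper's tissue-extraction step is more elaborate; this is a stand-in).
    mask = tile < threshold_otsu(tile)            # tissue is darker than glass
    mask = binary_closing(mask, disk(5))          # fill small gaps in the mask

    # Lossless storage of the tissue region: keep original pixels inside the mask,
    # flatten the background to a constant so it compresses well in PNG.
    tissue_only = np.where(mask, tile, 255).astype(np.uint8)
    imageio.imwrite("tissue_lossless.png", tissue_only)      # PNG is lossless

    # Lossy storage of the (diagnostically irrelevant) background region.
    background_only = np.where(mask, 255, tile).astype(np.uint8)
    imageio.imwrite("background_lossy.jpg", background_only, quality=30)  # JPEG is lossy
    ```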

  20. Genomic Diversity and the Microenvironment as Drivers of Progression in DCIS

    DTIC Science & Technology

    2016-10-01

    been acquiring new skills in medical image analysis and learning about the complexities of breast cancer diagnosis. How were the results...database and medical record searching at Duke, 2) Development of methods for isolating DNA from archival DCIS lesions, 3) Deep and comprehensive...on the Aim 3 results to the SPIE Medical Imaging Conference to be held in February 2017. If accepted, those will each be published in the form of a

  1. High-resolution 3D laser imaging based on tunable fiber array link

    NASA Astrophysics Data System (ADS)

    Zhao, Sisi; Ruan, Ningjuan; Yang, Song

    2017-10-01

    An airborne photoelectric reconnaissance system with its boresight directed toward the ground is an important battlefield situational-awareness system, which can be used for reconnaissance and surveillance of complex ground scenes. Airborne 3D imaging lidar is recognized as one of the most promising candidates for target detection against complex backgrounds, and is progressing toward high resolution, long detection range, high sensitivity, low power consumption, high reliability, eye safety and multi-functionality. However, traditional 3D laser imaging systems suffer from low imaging resolution, because of the small size of existing detectors, and from large volume. This paper proposes a high-resolution 3D laser imaging technology based on a tunable optical fiber array link. The echo signal is modulated by a tunable optical fiber array link and then transmitted to the focal plane detector. The detector converts the optical signal into electrical signals that are passed to the computer. The computer then performs the signal calculation and image restoration based on the modulation information and reconstructs the target image. This paper establishes a mathematical model of the tunable optical fiber array receiving link, and presents simulation and analysis of the factors affecting high-density multidimensional point cloud reconstruction.

  2. Science - Image in Action

    NASA Astrophysics Data System (ADS)

    Zavidovique, Bertrand; Lo Bosco, Giosue'

    pt. A. Information: data organization and communication. Statistical information: a Bayesian perspective / R. B. Stern, C. A. de B. Pereira. Multi-a(ge)nt graph patrolling and partitioning / Y. Elor, A. M. Bruckstein. The role of noise in brain function / S. Roy, R. Llinas. F-granulation, generalized rough entropy and image analysis / S. K. Pal. Fast redshift clustering with the Baire (ultra) metric / F. Murtagh, P. Contreras. Interactive classification oriented superresolution of multispectral images / P. Ruiz ... [et al.]. Blind processing in astrophysical data analysis / E. Salerno, L. Bedini. The extinction map of the orion molecular cloud / G. Scandariato (best student's paper), I. Pagano, M. Robberto -- pt. B. System: structure and behaviour. Common grounds: the role of perception in science and the nature of transitions / G. Bernroider. Looking the world from inside: intrinsic geometry of complex systems / L. Boi. The butterfly and the photon: new perspectives on unpredictability, and the notion of casual reality, in quantum physics / T. N. Palmer. Self-replicated wave patterns in neural networks with complex threshold / V. I. Nekorkin. A local explication of causation / G. Boniolo, R. Faraldo, A. Saggion. Evolving complexity, cognition, and consciousness / H. Liljenstrom. Self-assembly, modularity and physical complexity / S. E. Ahnert. The category of topological thermodynamics / R. M. Kiehn. Anti-phase spiking patterns / M. P. Igaev, A. S. Dmitrichev, V. I. Nekorkin -- pt. C. Data/system representation. Reality, models and representations: the case of galaxies, intelligence and avatars / J-C. Heudin. Astronomical images and data mining in the international virtual observatory context / F. Pasian, M. Brescia, G. Longo. Dame: a web oriented infrastructure for scientific data mining and exploration / S. Cavuoti ... [et al.]. Galactic phase spaces / D. Chakrabarty. From data to images: a shape based approach for fluorescence tomography / O. Dorn, K. E. Prieto. The influence of texture symmetry in marker pointing: experimenting with humans and algorithms / M. Cardaci, M. E. Tabacchi. A multiscale autocorrelation function for anisotropy studies / M. Scuderi ... [et al.]. A multiscale, lacunarity and neural network method for [symbol]/h discrimination in extensive air showers / A. Pagliaro, F. D'anna, G. D'ali Staiti. Bayesian semi-parametric curve-fitting and clustering in SDSS data / S. Mukkhopadhyay, S. Roy, S. Bhattacharya.

  3. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  4. Efficient processing of fluorescence images using directional multiscale representations.

    PubMed

    Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M

    2014-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data.

  5. Efficient processing of fluorescence images using directional multiscale representations

    PubMed Central

    Labate, D.; Laezza, F.; Negi, P.; Ozcan, B.; Papadakis, M.

    2017-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data. PMID:28804225

  6. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    PubMed

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell image(s), instead of a single readout, were generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  7. Using Atomic Force Microscopy to Characterize the Conformational Properties of Proteins and Protein-DNA Complexes That Carry Out DNA Repair.

    PubMed

    LeBlanc, Sharonda; Wilkins, Hunter; Li, Zimeng; Kaur, Parminder; Wang, Hong; Erie, Dorothy A

    2017-01-01

    Atomic force microscopy (AFM) is a scanning probe technique that allows visualization of single biomolecules and complexes deposited on a surface with nanometer resolution. AFM is a powerful tool for characterizing protein-protein and protein-DNA interactions. It can be used to capture snapshots of protein-DNA solution dynamics, which in turn, enables the characterization of the conformational properties of transient protein-protein and protein-DNA interactions. With AFM, it is possible to determine the stoichiometries and binding affinities of protein-protein and protein-DNA associations, the specificity of proteins binding to specific sites on DNA, and the conformations of the complexes. We describe methods to prepare and deposit samples, including surface treatments for optimal depositions, and how to quantitatively analyze images. We also discuss a new electrostatic force imaging technique called DREEM, which allows the visualization of the path of DNA within proteins in protein-DNA complexes. Collectively, these methods facilitate the development of comprehensive models of DNA repair and provide a broader understanding of all protein-protein and protein-nucleic acid interactions. The structural details gleaned from analysis of AFM images coupled with biochemistry provide vital information toward establishing the structure-function relationships that govern DNA repair processes. © 2017 Elsevier Inc. All rights reserved.

  8. Sparse models for correlative and integrative analysis of imaging and genetic data

    PubMed Central

    Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.

    2014-01-01

    The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets present a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples of how these approaches are used for the detection of risk genes and classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets including their interactions such as epistasis. PMID:25218561
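    For readers unfamiliar with sparse models in this setting, a minimal sketch of one common variant, an L1-penalized (Lasso) regression of an imaging phenotype on genetic variants, is shown below. The simulated genotype matrix, phenotype and cross-validated penalty are illustrative assumptions; the review itself covers a broader family of sparse correlative and integrative models (e.g., sparse canonical correlation analysis).

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    # Hypothetical data: SNP genotypes (n subjects x p variants) and one imaging
    # phenotype per subject (e.g., mean activation in a region of interest).
    rng = np.random.default_rng(42)
    n_subjects, n_snps = 200, 5000
    genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
    imaging_phenotype = (genotypes[:, :5] @ np.array([0.8, -0.5, 0.6, 0.4, -0.7])
                         + rng.normal(scale=1.0, size=n_subjects))

    # Sparse (L1-penalized) regression selects a small set of variants associated
    # with the imaging measure; the penalty strength is chosen by cross-validation.
    model = LassoCV(cv=5).fit(genotypes, imaging_phenotype)
    selected = np.flatnonzero(model.coef_)
    print(f"{selected.size} variants retained, e.g. indices {selected[:10]}")
    ```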

  9. A modeling analysis program for the JPL Table Mountain Io sodium cloud data

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Goldberg, B. A.

    1986-01-01

    Progress and achievements in the second year are discussed in three main areas: (1) data quality review of the 1981 Region B/C images; (2) data processing activities; and (3) modeling activities. The data quality review revealed that almost all 1981 Region B/C images are of sufficient quality to be valuable in the analyses of the JPL data set. In the second area, the major milestone reached was the successful development and application of complex image-processing software required to render the original image data suitable for modeling analysis studies. In the third area, the lifetime description of sodium atoms in the planet magnetosphere was improved in the model to include the offset dipole nature of the magnetic field as well as an east-west electric field. These improvements are important in properly representing the basic morphology as well as the east-west asymmetries of the sodium cloud.

  10. User's manual for University of Arizona APART program (Analysis Program - Arizona Radiation Trace)

    NASA Technical Reports Server (NTRS)

    Breault, R. P.

    1975-01-01

    A description and operating instructions for the Analysis Program Arizona Radiation Trace (APART) are given. This is a computer program that is able to efficiently and accurately predict the off-axis rejection characteristics of unwanted stray radiation for complex rotationally symmetric optical systems. The program first determines the critical objects or areas that scatter radiation to the image plane either directly or through imaging elements: this provides the opportunity to modify, if necessary, the design so that the number of critical areas seen by the image plane is reduced or the radiation to these critical areas is minimized. Next, the power distribution reaching the image plane and a sectional power map of all internal surfaces are computed. Angular information is also provided that relates the angle by which the radiation came into a surface to the angle by which the radiation is scattered out of the surface.

  11. Spotsizer: High-throughput quantitative analysis of microbial growth.

    PubMed

    Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg

    2016-10-01

    Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.

  12. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and interpretation of their morphological parameters can be estimated from thin sections or 3D computed tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, namely the singularity (α) and f(α) spectra, to quantify this complexity without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm per pixel resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References: Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D Soil Images. Nonlinear Processes in Geophysics, 15, 881-891, 2008. Tarquis, A.M., R.J. Heck, D. Andina, A. Alvarez and J.M. Antón. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239, 2009. Tarquis, A.M., D. Giménez, A. Saa, M.C. Díaz and J.M. Gascó. Scaling and Multiscaling of Soil Pore Systems Determined by Image Analysis. Scaling Methods in Soil Systems. Pachepsky, Radcliffe and Selim Eds., 19-33, 2003. CRC Press, Boca Ratón, Florida. Acknowledgements: The first author acknowledges the financial support obtained from the Soil Imaging Laboratory (University of Guelph, Canada) in 2014.

  13. Tissue polarimetry: concepts, challenges, applications, and outlook.

    PubMed

    Ghosh, Nirmalya; Vitkin, I Alex

    2011-11-01

    Polarimetry has a long and successful history in various forms of clear media. Driven by their biomedical potential, the use of polarimetric approaches for biological tissue assessment has also recently received considerable attention. Specifically, polarization can be used as an effective tool to discriminate against multiply scattered light (acting as a gating mechanism) in order to enhance contrast and to improve tissue imaging resolution. Moreover, the intrinsic tissue polarimetry characteristics contain a wealth of morphological and functional information of potential biomedical importance. However, in a complex random medium like tissue, numerous complexities due to multiple scattering and the simultaneous occurrence of many scattering and polarization events present formidable challenges, both in terms of accurate measurement and in terms of analysis of the tissue polarimetry signal. In order to realize the potential of polarimetric approaches for tissue imaging and characterization/diagnosis, a number of researchers are thus pursuing innovative solutions to these challenges. In this review paper, we summarize these and other issues pertinent to polarized light methodologies in tissues. Specifically, we discuss polarized light basics, the Stokes-Mueller formalism, methods of polarization measurement, polarized light modeling in turbid media, applications to tissue imaging, inverse analysis for polarimetric results quantification, applications to quantitative tissue assessment, etc.
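    For reference, the Stokes-Mueller formalism mentioned above describes the action of a sample on polarized light as a linear relation between Stokes vectors; the following is the standard textbook form, not anything specific to this review.

    ```latex
    \[
    S_{\text{out}} = M\, S_{\text{in}}, \qquad
    S = \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix}, \qquad
    M = \begin{pmatrix}
    m_{11} & m_{12} & m_{13} & m_{14}\\
    m_{21} & m_{22} & m_{23} & m_{24}\\
    m_{31} & m_{32} & m_{33} & m_{34}\\
    m_{41} & m_{42} & m_{43} & m_{44}
    \end{pmatrix}
    \]
    ```

    Here I is the total intensity, Q and U describe linear polarization and V circular polarization, and the 4x4 Mueller matrix M encodes how the sample transforms an incident polarization state; inverse analysis of a measured M (for example by decomposition methods) is commonly used to recover quantities such as depolarization, diattenuation and retardance.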

  14. Nanobodies: site-specific labeling for super-resolution imaging, rapid epitope-mapping and native protein complex isolation

    PubMed Central

    Pleiner, Tino; Bates, Mark; Trakhanov, Sergei; Lee, Chung-Tien; Schliep, Jan Erik; Chug, Hema; Böhning, Marc; Stark, Holger; Urlaub, Henning; Görlich, Dirk

    2015-01-01

    Nanobodies are single-domain antibodies of camelid origin. We generated nanobodies against the vertebrate nuclear pore complex (NPC) and used them in STORM imaging to locate individual NPC proteins with <2 nm epitope-label displacement. For this, we introduced cysteines at specific positions in the nanobody sequence and labeled the resulting proteins with fluorophore-maleimides. As nanobodies are normally stabilized by disulfide-bonded cysteines, this appears counterintuitive. Yet, our analysis showed that this caused no folding problems. Compared to traditional NHS ester-labeling of lysines, the cysteine-maleimide strategy resulted in far less background in fluorescence imaging, it better preserved epitope recognition and it is site-specific. We also devised a rapid epitope-mapping strategy, which relies on crosslinking mass spectrometry and the introduced ectopic cysteines. Finally, we used different anti-nucleoporin nanobodies to purify the major NPC building blocks – each in a single step, with native elution and, as demonstrated, in excellent quality for structural analysis by electron microscopy. The presented strategies are applicable to any nanobody and nanobody-target. DOI: http://dx.doi.org/10.7554/eLife.11349.001 PMID:26633879

  15. A statistical image analysis framework for pore-free islands derived from heterogeneity distribution of nuclear pore complexes.

    PubMed

    Mimura, Yasuhiro; Takemoto, Satoko; Tachibana, Taro; Ogawa, Yutaka; Nishimura, Masaomi; Yokota, Hideo; Imamoto, Naoko

    2017-11-24

    Nuclear pore complexes (NPCs) maintain cellular homeostasis by mediating nucleocytoplasmic transport. Although cyclin-dependent kinases (CDKs) regulate NPC assembly in interphase, the location of NPC assembly on the nuclear envelope is not clear. CDKs also regulate the disappearance of pore-free islands, which are nuclear envelope subdomains; this subdomain gradually disappears with increasing homogeneity of the NPC distribution in response to CDK activity. However, a causal relationship between pore-free islands and NPC assembly remains unclear. Here, we elucidated mechanisms underlying NPC assembly from a new perspective by focusing on pore-free islands. We proposed a novel framework for image-based analysis to automatically determine the detailed 'landscape' of pore-free islands from a large quantity of images, leading to the identification of NPC intermediates that appear in pore-free islands with increased frequency in response to CDK activity. Comparison of the spatial distributions of simulated and observed NPC intermediates within pore-free islands showed that the observed distribution was spatially biased. These results suggested that the disappearance of pore-free islands is highly related to de novo NPC assembly and indicated the existence of specific regulatory mechanisms for the spatial arrangement of NPC assembly on nuclear envelopes.

  16. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods

    PubMed Central

    Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.

    2012-01-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394

  17. Breast MRI radiogenomics: Current status and research implications.

    PubMed

    Grimm, Lars J

    2016-06-01

    Breast magnetic resonance imaging (MRI) radiogenomics is an emerging area of research that has the potential to directly influence clinical practice. Clinical MRI scanners today are capable of providing excellent temporal and spatial resolution, which allows extraction of numerous imaging features via human extraction approaches or complex computer vision algorithms. Meanwhile, advances in breast cancer genetics research have resulted in the identification of promising genes associated with cancer outcomes. In addition, validated genomic signatures have been developed that allow categorization of breast cancers into distinct molecular subtypes as well as prediction of the risk of cancer recurrence and response to therapy. Current radiogenomics research has been directed towards exploratory analysis of individual genes, understanding tumor biology, and developing imaging surrogates to genetic analysis, with the long-term goal of developing a meaningful tool for clinical care. The background of breast MRI radiogenomics research, image feature extraction techniques, approaches to radiogenomics research, and promising areas of investigation are reviewed. J. Magn. Reson. Imaging 2016;43:1269-1278. © 2015 Wiley Periodicals, Inc.

  18. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods.

    PubMed

    Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J; Pridmore, Tony P

    2012-06-05

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.

  19. Quantitative x-ray phase-contrast imaging using a single grating of comparable pitch to sample feature size.

    PubMed

    Morgan, Kaye S; Paganin, David M; Siu, Karen K W

    2011-01-01

    The ability to quantitatively retrieve transverse phase maps during imaging by using coherent x rays often requires a precise grating or analyzer-crystal-based setup. Imaging of live animals presents further challenges when these methods require multiple exposures for image reconstruction. We present a simple method of single-exposure, single-grating quantitative phase contrast for a regime in which the grating period is much greater than the effective pixel size. A grating is used to create a high-visibility reference pattern incident on the sample, which is distorted according to the complex refractive index and thickness of the sample. The resolution, along a line parallel to the grating, is not restricted by the grating spacing, and the detector resolution becomes the primary determinant of the spatial resolution. We present a method of analysis that maps the displacement of interrogation windows in order to retrieve a quantitative phase map. Application of this analysis to the imaging of known phantoms shows excellent correspondence.
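    The displacement-mapping step described above can be sketched with a standard window-by-window cross-correlation, here via scikit-image's phase correlation. The window size, the sub-pixel upsampling factor and the stopping point (a displacement field rather than the integrated phase map) are assumptions of this sketch, not details of the published method.

    ```python
    import numpy as np
    from skimage.registration import phase_cross_correlation

    def window_displacements(reference, sample, window=32):
        """Shift of each interrogation window of `sample` relative to `reference`.

        Illustrative only: the published analysis integrates these transverse
        shifts (proportional to the phase gradient) into a quantitative phase
        map; here we stop at the displacement field.
        """
        ny, nx = reference.shape
        rows = range(0, ny - window + 1, window)
        cols = range(0, nx - window + 1, window)
        dy = np.zeros((len(rows), len(cols)))
        dx = np.zeros_like(dy)
        for i, r in enumerate(rows):
            for j, c in enumerate(cols):
                ref_win = reference[r:r + window, c:c + window]
                sam_win = sample[r:r + window, c:c + window]
                shift, _, _ = phase_cross_correlation(ref_win, sam_win, upsample_factor=10)
                dy[i, j], dx[i, j] = shift
        return dy, dx
    ```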

  20. Synthesis, Characterization, and Handling of Eu(II)-Containing Complexes for Molecular Imaging Applications

    NASA Astrophysics Data System (ADS)

    Basal, Lina A.; Allen, Matthew J.

    2018-03-01

    Considerable research effort has focused on the in vivo use of responsive imaging probes that change imaging properties upon reacting with oxygen because hypoxia is relevant to diagnosing, treating, and monitoring diseases. One promising class of compounds for oxygen-responsive imaging is Eu(II)-containing complexes because the Eu(II/III) redox couple enables imaging with multiple modalities including magnetic resonance and photoacoustic imaging. The use of Eu(II) requires care in handling to avoid unintended oxidation during synthesis and characterization. This review describes recent advances in the field of imaging agents based on discrete Eu(II)-containing complexes with specific focus on the synthesis, characterization, and handling of aqueous Eu(II)-containing complexes.

  1. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractals Patterns.

    PubMed

    Street, Nichola; Forsythe, Alexandra M; Reilly, Ronan; Taylor, Richard; Helmy, Mai S

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied, with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses, but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore whether each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from a pair of images) and the second a mid-range FD model (likelihood of selecting an image within the mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in keeping with current findings, the mid-range models show greater consistency in preference, not mediated by gender, age or culture. This article supports the established theory that mid-range fractal patterns appear to be a universal construct underlying preference, but it also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This points to a current stalemate in the field of empirical aesthetics.

  2. Composition of the cellular infiltrate in patients with simple and complex appendicitis.

    PubMed

    Gorter, Ramon R; Wassenaar, Emma C E; de Boer, Onno J; Bakx, Roel; Roelofs, Joris J T H; Bunders, Madeleine J; van Heurn, L W Ernst; Heij, Hugo A

    2017-06-15

    It is now well established that there are two types of appendicitis: simple (nonperforating) and complex (perforating). This study evaluates differences in the composition of the immune cellular infiltrate in children with simple and complex appendicitis. A total of 47 consecutive children undergoing appendectomy for acute appendicitis between January 2011 and December 2012 were included. Intraoperative criteria were used to identify patients with either simple or complex appendicitis and were confirmed histopathologically. Immune histochemical techniques were used to identify immune cell markers in the appendiceal specimens. Digital imaging analysis was performed using Image J. In the specimens of patients with complex appendicitis, significantly more myeloperoxidase positive cells (neutrophils) (8.7% versus 1.2%, P < 0.001) were detected compared to patients with a simple appendicitis. In contrast, fewer CD8+ T cells (0.4% versus 1.3%, P = 0.016), CD20 + cells (2.9% versus 9.0%, P = 0.027), and CD21 + cells (0.2% versus 0.6%, P = 0.028) were present in tissue from patients with complex compared to simple appendicitis. The increase in proinflammatory innate cells and decrease of adaptive cells in patients with complex appendicitis suggest potential aggravating processes in complex appendicitis. Further research into the underlying mechanisms may identify novel biomarkers to be able to differentiate simple and complex appendicitis. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. High Performance Computing and Cutting-Edge Analysis Can Open New Realms

    Science.gov Websites

    March 1, 2018. [Web-page image caption: two people looking at 3D interactive graphical data in the Visualization Center.] ... capabilities to visualize complex, 3D images of the wakes from multiple wind turbines so that we can better ...

  4. Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes

    PubMed Central

    Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin

    2012-01-01

    Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
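    As a concrete example of one of the analyses listed above, activation mapping, the sketch below assigns each pixel an activation time at the frame of maximum upstroke (dF/dt). The synthetic data, the sampling rate and the choice of the maximum-derivative criterion (one of several conventions in the optical-mapping literature) are illustrative assumptions.

    ```python
    import numpy as np

    def activation_map(stack, fs):
        """Activation time (in ms) per pixel from an optical-mapping movie.

        `stack` is a (frames, rows, cols) fluorescence array assumed to be
        already segmented, filtered and drift-corrected; activation is taken
        at the time of the maximum upstroke (dF/dt).
        """
        dfdt = np.diff(stack.astype(float), axis=0)       # frame-to-frame derivative
        activation_frame = np.argmax(dfdt, axis=0)        # index of the steepest upstroke
        return activation_frame * 1000.0 / fs             # convert frames to milliseconds

    # Example with a synthetic 500 Hz recording (placeholder data).
    rng = np.random.default_rng(1)
    movie = rng.normal(size=(100, 64, 64)).cumsum(axis=0)
    print(activation_map(movie, fs=500.0).shape)
    ```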

  5. Image Patch Analysis of Sunspots and Active Regions

    NASA Astrophysics Data System (ADS)

    Moon, K.; Delouille, V.; Hero, A.

    2017-12-01

    The flare productivity of an active region has been observed to be related to its spatial complexity. Separating active regions that are quiet from potentially eruptive ones is a key issue in space weather applications. Traditional classification schemes such as Mount Wilson and McIntosh have been effective in relating an active region's large-scale magnetic configuration to its ability to produce eruptive events. However, their qualitative nature does not use all of the information present in the observations. In our work, we present an image patch analysis for characterizing sunspots and active regions. We first propose fine-scale quantitative descriptors for an active region's complexity, such as intrinsic dimension, and we relate them to the Mount Wilson classification. Second, we introduce a new clustering of active regions that is based on the local geometry observed in line-of-sight magnetogram and continuum images. To obtain this local geometry, we use a reduced-dimension representation of an active region that is obtained by factoring the corresponding data matrix, composed of local image patches, using the singular value decomposition. The resulting factorizations of active regions can be compared via the definition of appropriate metrics on the factors. The distances obtained from these metrics are then used to cluster the active regions. We find that these metrics result in natural clusterings of active regions. The clusterings are related to large-scale descriptors of an active region such as its size, its local magnetic field distribution, and its complexity as measured by the Mount Wilson classification scheme. We also find that including data focused on the neutral line of an active region can result in an increased correspondence between our clustering results and other active region descriptors such as the Mount Wilson classifications and the R-value.
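    The patch-based factorization step can be sketched in a few lines: non-overlapping tiles are vectorised into a data matrix, whose singular value decomposition yields a low-rank basis and per-patch coefficients. The patch size, rank and per-patch mean removal are illustrative choices, not the paper's exact settings.

    ```python
    import numpy as np

    def patch_factorization(image, patch=8, rank=10):
        """Reduced-dimension representation of an image from its local patches."""
        img = np.asarray(image, dtype=float)
        ny = (img.shape[0] // patch) * patch
        nx = (img.shape[1] // patch) * patch
        # Vectorise non-overlapping patch x patch tiles into the rows of a matrix.
        tiles = (img[:ny, :nx]
                 .reshape(ny // patch, patch, nx // patch, patch)
                 .swapaxes(1, 2)
                 .reshape(-1, patch * patch))
        tiles -= tiles.mean(axis=1, keepdims=True)        # remove per-patch offset
        u, s, vt = np.linalg.svd(tiles, full_matrices=False)
        coeffs = u[:, :rank] * s[:rank]                    # per-patch coefficients
        basis = vt[:rank]                                  # patch-space basis
        return coeffs, basis
    ```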

  6. Present and future in the use of micro-CT scanner 3D analysis for the study of dental and root canal morphology.

    PubMed

    Grande, Nicola M; Plotino, Gianluca; Gambarini, Gianluca; Testarelli, Luca; D'Ambrosio, Ferdinando; Pecci, Raffaella; Bedini, Rossella

    2012-01-01

    The goal of the present article is to illustrate and analyze the applications and the potential of microcomputed tomography (micro-CT) in the analysis of tooth anatomy and root canal morphology. The authors performed a micro-CT analysis of the following different teeth: maxillary first molars with a second canal in the mesiobuccal (MB) root, mandibular first molars with complex anatomy in the mesial root, premolars with single and double roots and with complicated apical anatomy. The hardware device used in this study was a desktop X-ray microfocus CT scanner (SkyScan 1072, SkyScan bvba, Aartselaar, Belgium). A specific software ResolveRT Amira (Visage Imaging) was used for the 3D analysis and imaging. The authors obtained three-dimensional images from 15 teeth. It was possible to precisely visualize and analyze external and internal anatomy of teeth, showing the finest details. Among the 5 upper molars analyzed, in three cases, the MB canals joined into one canal, while in the other two molars the two mesial canals were separate. Among the lower molars two of the five samples exhibited a single canal in the mesial root, which had a broad, flat appearance in a mesiodistal dimension. In the five premolar teeth, the canals were independent; however, the apical delta and ramifications of the root canals were quite complex. Micro-CT offers a simple and reproducible technique for 3D noninvasive assessment of the anatomy of root canal systems.

  7. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming to resolve the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, built on the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, giving the scheme “one-time pad” characteristics and markedly improving the security of the algorithm without a significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen-plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
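    To make the "different keystream for each encryption" idea concrete, the sketch below XORs an image with a logistic-map keystream seeded by the key plus a fresh random parameter. This is a generic illustrative chaotic cipher, not the paper's algorithm, and the information-hiding step (embedding the parameter in the ciphered image) is deliberately omitted; here the parameter is simply returned.

    ```python
    import numpy as np

    def logistic_keystream(length, x0, r=3.99):
        """Byte keystream generated by iterating the logistic map x -> r*x*(1-x)."""
        x = x0
        out = np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def encrypt(image, key):
        """XOR the pixels with a keystream seeded by the key plus a fresh random
        parameter, so every encryption uses a different keystream."""
        img = np.asarray(image, dtype=np.uint8)
        nonce = float(np.random.default_rng().random())   # per-encryption parameter
        x0 = (key + nonce) % 1.0 or 0.5                    # keep the seed inside (0, 1)
        stream = logistic_keystream(img.size, x0)
        return img.ravel() ^ stream, nonce

    def decrypt(cipher, key, nonce, shape):
        x0 = (key + nonce) % 1.0 or 0.5
        stream = logistic_keystream(cipher.size, x0)
        return (cipher ^ stream).reshape(shape)

    # Round-trip check on a random 8-bit image.
    plain = np.random.default_rng(3).integers(0, 256, size=(32, 32), dtype=np.uint8)
    cipher, nonce = encrypt(plain, key=0.3141592653)
    assert np.array_equal(decrypt(cipher, 0.3141592653, nonce, plain.shape), plain)
    ```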

  8. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    Correlated redundancy in space and energy is eliminated by using a DWT lifting scheme, and the complexity of the image is reduced by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed in this paper, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm has a coding and decoding process without backtracking for dealing with the pixels of an image. It supports LOCO-I and it can also be applied to the coder/decoder. Simulation analysis indicates that the proposed method can achieve high lossless image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. Accessing main memory on a Pentium IV CPU at 2.20 GHz with 256 MB of RAM, the coding speed of the proposed coder is about 21 times faster than SPIHT, with a performance efficiency gain of roughly 166%, and the decoding speed is about 17 times faster than SPIHT, with an efficiency gain of roughly 128%.
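    To make the two building blocks named above concrete, the sketch below shows a standard reversible colour transform (the integer RCT used in lossless JPEG 2000) as one example of an "algebraic transform among the RGB components", and a one-level integer Haar lifting step as the simplest example of a reversible DWT lifting scheme. These are generic textbook versions, not the authors' improved enumerating scheme or Rice coder.

    ```python
    import numpy as np

    def rct_forward(rgb):
        """Reversible integer colour transform (as in lossless JPEG 2000), used to
        decorrelate the RGB components before wavelet coding."""
        r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
        y = (r + 2 * g + b) >> 2          # luma-like component, floor((R+2G+B)/4)
        cb = b - g                        # chroma difference components
        cr = r - g
        return y, cb, cr

    def haar_lift_forward(x):
        """One level of integer Haar lifting along the last axis (even length
        assumed); the step is exactly invertible, so no information is lost."""
        a = x[..., 0::2].astype(np.int32)
        b = x[..., 1::2].astype(np.int32)
        d = a - b                         # detail (high-pass) coefficients
        s = b + (d >> 1)                  # approximation (low-pass) coefficients
        return s, d

    def haar_lift_inverse(s, d):
        b = s - (d >> 1)
        a = d + b
        out = np.empty(s.shape[:-1] + (s.shape[-1] * 2,), dtype=np.int32)
        out[..., 0::2], out[..., 1::2] = a, b
        return out

    # Round trip on one row of pixel values confirms reversibility.
    row = np.array([[10, 12, 200, 180, 7, 9, 64, 66]], dtype=np.int32)
    s, d = haar_lift_forward(row)
    assert np.array_equal(haar_lift_inverse(s, d), row)
    ```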

  9. Adaptive optics for in-vivo exploration of human retinal structures

    NASA Astrophysics Data System (ADS)

    Paques, Michel; Meimon, Serge; Grieve, Kate; Rossant, Florence

    2017-06-01

    Adaptive optics (AO)-enhanced imaging of the retina is now reaching a level of technical maturity which fosters its expanding use in research and clinical centers around the world. By achieving wavelength-limited resolution it has not only allowed better observation of retinal substructures already visible by other means, it has also broken anatomical frontiers such as individual photoreceptors or vessel walls. The clinical application of AO-enhanced imaging has been slower than that of optical coherence tomography because of the combination of technical complexity, cost and the paucity of interpretative schemes for complex data. In several diseases, AO-enhanced imaging has already proven to provide added clinical value and quantitative biomarkers. Here, we review some of the clinical applications of AO-enhanced en face imaging, and trace perspectives to improve its clinical pertinence in these applications. An interesting perspective is to document cell motion through time-lapse imaging, for example during age-related macular degeneration. In arterial hypertension, the possibility to measure parietal thickness and perform fine morphometric analysis is of interest for monitoring patients. In the near future, implementation of novel approaches and multimodal imaging, including in particular optical coherence tomography, will undoubtedly expand our imaging capabilities. Tackling the technical, scientific and medical challenges posed by high resolution imaging is likely to contribute to our rethinking of many retinal diseases and, most importantly, may find applications in other areas of medicine.

  10. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is one principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design of the quantum median filter and its application to image de-noising. To this end, we first present the quantum circuits for three basic modules (i.e., Cycle Shift, Comparator, and Swap), and then design two composite modules (i.e., Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of the image size n, so that the classical method can be sped up.
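    For context, the classical counterpart referred to above is the ordinary spatial median filter; a minimal sketch using SciPy is shown below. The test image, noise level and 3x3 window size are arbitrary choices for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    # Classical median filtering: each pixel is replaced by the median of its
    # neighbourhood, which suppresses salt-and-pepper noise very effectively.
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

    # Add salt-and-pepper noise to 5% of the pixels.
    noisy = image.copy()
    mask = rng.random(image.shape) < 0.05
    noisy[mask] = rng.choice([0, 255], size=mask.sum())

    denoised = median_filter(noisy, size=3)   # 3x3 spatial median filter
    ```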

  11. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  12. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
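    The SLIC over-segmentation step that seeds the hierarchy is readily available in scikit-image; a minimal sketch is shown below. The bundled test image and the parameter values (number of segments, compactness) are placeholders, not values from the paper.

    ```python
    from skimage.segmentation import slic
    from skimage.data import astronaut

    # Initial over-segmentation step of an OBIA pipeline: SLIC superpixels provide
    # the leaves from which a hierarchy such as a Binary Partition Tree is built.
    image = astronaut()
    superpixels = slic(image, n_segments=500, compactness=10, start_label=1)
    print(superpixels.max(), "superpixels")
    ```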

  13. Enhanced cellulose orientation analysis in complex model plant tissues.

    PubMed

    Rüggeberg, Markus; Saxe, Friederike; Metzger, Till H; Sundberg, Björn; Fratzl, Peter; Burgert, Ingo

    2013-09-01

    The orientation distribution of cellulose microfibrils in the plant cell wall is a key parameter for understanding anisotropic plant growth and mechanical behavior. However, precisely visualizing cellulose orientation in the plant cell wall has always been a challenge due to the small size of the cellulose microfibrils and the complex network of polymers in the plant cell wall. X-ray diffraction is one of the most frequently used methods for analyzing cellulose orientation in single cells and plant tissues, but the interpretation of the diffraction images is complex. Traditionally, circular or square cells and a Gaussian orientation distribution of the cellulose microfibrils have been assumed to elucidate cellulose orientation from the diffraction images. However, the complex tissue structures of common model plant systems such as Arabidopsis or aspen (Populus) require a more sophisticated approach. We present an evaluation procedure which takes into account the precise cell geometry and is able to deal with complex microfibril orientation distributions. The evaluation procedure reveals the entire orientation distribution of the cellulose microfibrils, reflecting different orientations within the multi-layered cell wall. By analyzing aspen wood and Arabidopsis stems we demonstrate the versatility of this method and show that simplifying assumptions on geometry and orientation distributions can lead to errors in the calculated microfibril orientation pattern. The simulation routine is intended to be used as a valuable tool for nanostructural analysis of plant cell walls and is freely available from the authors on request. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. He-Ion Microscopy as a High-Resolution Probe for Complex Quantum Heterostructures in Core-Shell Nanowires.

    PubMed

    Pöpsel, Christian; Becker, Jonathan; Jeon, Nari; Döblinger, Markus; Stettner, Thomas; Gottschalk, Yeanitza Trujillo; Loitsch, Bernhard; Matich, Sonja; Altzschner, Marcus; Holleitner, Alexander W; Finley, Jonathan J; Lauhon, Lincoln J; Koblmüller, Gregor

    2018-06-13

    Core-shell semiconductor nanowires (NW) with internal quantum heterostructures are amongst the most complex nanostructured materials to be explored for assessing the ultimate capabilities of diverse ultrahigh-resolution imaging techniques. To probe the structure and composition of these materials in their native environment with minimal damage and sample preparation calls for high-resolution electron or ion microscopy methods, which have not yet been tested on such classes of ultrasmall quantum nanostructures. Here, we demonstrate that scanning helium ion microscopy (SHeIM) provides a powerful and straightforward method to map quantum heterostructures embedded in complex III-V semiconductor NWs with unique material contrast at ∼1 nm resolution. By probing the cross sections of GaAs-Al(Ga)As core-shell NWs with coaxial GaAs quantum wells as well as short-period GaAs/AlAs superlattice (SL) structures in the shell, the Al-rich and Ga-rich layers are accurately discriminated by their image contrast in excellent agreement with correlated, yet destructive, scanning transmission electron microscopy and atom probe tomography analysis. Most interestingly, quantitative He-ion dose-dependent SHeIM analysis of the ternary AlGaAs shell layers and of compositionally nonuniform GaAs/AlAs SLs reveals distinct alloy composition fluctuations in the form of Al-rich clusters with size distributions between ∼1-10 nm. In the GaAs/AlAs SLs the alloy clustering vanishes with increasing SL-period (>5 nm-GaAs/4 nm-AlAs), providing insights into critical size dimensions for atomic intermixing effects in short-period SLs within a NW geometry. The straightforward SHeIM technique therefore provides unique benefits in imaging the tiniest nanoscale features in topography, structure and composition of a multitude of diverse complex semiconductor nanostructures.

  15. Multifractal spectrum and lacunarity as measures of complexity of osseointegration.

    PubMed

    de Souza Santos, Daniel; Dos Santos, Leonardo Cavalcanti Bezerra; de Albuquerque Tavares Carvalho, Alessandra; Leão, Jair Carneiro; Delrieux, Claudio; Stosic, Tatijana; Stosic, Borko

    2016-07-01

    The goal of this study is to contribute to a better quantitative description of the early stages of osseointegration, by application of fractal, multifractal, and lacunarity analysis. Fractal, multifractal, and lacunarity analysis are performed on scanning electron microscopy (SEM) images of titanium implants that were first subjected to different treatment combinations of i) sand blasting, ii) acid etching, and iii) exposure to calcium phosphate, and were then submersed in a simulated body fluid (SBF) for 30 days. All three numerical techniques are applied to the implant SEM images before and after SBF immersion, in order to provide a comprehensive set of common quantitative descriptors. It is found that implants subjected to different physicochemical treatments before submersion in SBF exhibit a rather similar level of complexity, while the great variety of crystal forms after SBF submersion reveals rather different quantitative measures (reflecting complexity) for different treatments. In particular, it is found that acid treatment, in most combinations with the other considered treatments, leads to a higher fractal dimension (more uniform distribution of crystals), lower lacunarity (lesser variation in gap sizes), and narrowing of the multifractal spectrum (smaller fluctuations on different scales). The current quantitative description has shown the capacity to capture the main features of complex images of implant surfaces, for several different treatments. Such quantitative description should provide a fundamental tool for future large scale systematic studies, considering the large variety of possible implant treatments and their combinations. Quantitative description of early stages of osseointegration on titanium implants with different treatments should help develop a better understanding of this phenomenon, in general, and provide a basis for further systematic experimental studies. Clinical practice should benefit from such studies in the long term, by more ready access to implants of higher quality.

  16. MFDFA and Lacunarity Analysis of Synthetic Multifractals and Pre-Cancerous Tissues

    NASA Astrophysics Data System (ADS)

    Roy, A.; Das, N.; Ghosh, N.

    2017-12-01

    Multifractal Detrended Fluctuation Analysis (MFDFA) has been employed for evaluating complex variations in the refractive index (RI) of human pre-cancerous tissues. While this was primarily aimed towards the early diagnosis of cancer in the human cervix, the question remains whether multifractal analysis alone can be conclusively used for distinguishing between various grades of pre-cancerous tissues. Lacunarity is a parameter that was developed for multiscale analysis of data and has been shown to be theoretically related to the correlation dimension, D2, by dlog(L)/dlog(x) = D2 - 2. Further, research has shown that not only can lacunarity be used as a preliminary indicator of multifractal behavior but it also distinguishes between images with similar correlation dimension values. In order to compare the efficacy of the two approaches, namely MFDFA and lacunarity, in distinguishing between pre-cancerous tissues of various grades, we test these techniques on a set of 2-dimensional theoretical random multifractal fields. MFDFA is employed for computing the width of the singularity spectrum f(α), which is a measure of multifractal behavior. A weighted mean of the log-transformed lacunarity values at different scales is employed for differentiating between patterns with the same correlation dimension but differences in texture. The two different techniques are then applied to images containing RI values of biopsy samples from human cervical tissues that were histo-pathologically characterized as grade-I and grade-II pre-cancerous cells. The results show that the two approaches are complementary to one another when it comes to multi-scale analysis of complex natural patterns manifested in the images of such pre-cancerous cells.
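
    As an illustration of the gliding-box lacunarity measure discussed above (not the authors' implementation), the following Python sketch computes Lambda(r) = <M^2>/<M>^2 for a binary 2-D field at several box sizes r; the function name and the synthetic test image are hypothetical.

```python
import numpy as np

def gliding_box_lacunarity(binary_img, box_sizes):
    """Gliding-box lacunarity of a 2D binary array for each box size r.

    Lambda(r) = <M^2> / <M>^2, where M is the box mass (number of
    occupied pixels) over all box positions.
    """
    img = np.asarray(binary_img, dtype=float)
    results = {}
    for r in box_sizes:
        # cumulative-sum trick gives the mass of every r x r window
        c = np.cumsum(np.cumsum(img, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        masses = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
        results[r] = (masses**2).mean() / masses.mean()**2
    return results

# Example: lacunarity of a random binary texture at a few scales
rng = np.random.default_rng(0)
img = (rng.random((256, 256)) < 0.3).astype(int)
print(gliding_box_lacunarity(img, box_sizes=[2, 4, 8, 16]))
```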

  17. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626
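
    PaCeQuant itself is an ImageJ-based tool; the following Python sketch is only a rough, hypothetical illustration of the kind of global shape descriptors (area, perimeter, solidity, circularity) such tools extract from a labeled segmentation, assuming scikit-image is available. It is not the PaCeQuant feature set.

```python
import numpy as np
from skimage import measure

def basic_shape_features(label_img):
    """Compute a few global shape descriptors per segmented cell.

    Illustrative only: real tools such as PaCeQuant extract many more
    (contour-, skeleton- and lobe-based) features.
    """
    feats = []
    for region in measure.regionprops(label_img):
        area = region.area
        perimeter = region.perimeter
        circularity = 4.0 * np.pi * area / perimeter**2 if perimeter > 0 else 0.0
        feats.append({
            "label": region.label,
            "area": area,
            "perimeter": perimeter,
            "solidity": region.solidity,   # area / convex hull area
            "circularity": circularity,    # 1.0 for a perfect circle
        })
    return feats

# Toy example: two square "cells" in a labeled image
labels = np.zeros((100, 100), dtype=int)
labels[10:40, 10:40] = 1
labels[50:90, 50:90] = 2
for f in basic_shape_features(labels):
    print(f)
```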

  18. GeoPAT: A toolbox for pattern-based information retrieval from large geospatial databases

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Jarosław; Netzel, Paweł; Stepinski, Tomasz

    2015-07-01

    Geospatial Pattern Analysis Toolbox (GeoPAT) is a collection of GRASS GIS modules for carrying out pattern-based geospatial analysis of images and other spatial datasets. The need for pattern-based analysis arises when images/rasters contain rich spatial information either because of their very high resolution or their very large spatial extent. Elementary units of pattern-based analysis are scenes - patches of surface consisting of a complex arrangement of individual pixels (patterns). GeoPAT modules implement popular GIS algorithms, such as query, overlay, and segmentation, to operate on the grid of scenes. To achieve these capabilities GeoPAT includes a library of scene signatures - compact numerical descriptors of patterns, and a library of distance functions - providing numerical means of assessing dissimilarity between scenes. Ancillary GeoPAT modules use these functions to construct a grid of scenes or to assign signatures to individual scenes having regular or irregular geometries. Thus GeoPAT combines knowledge retrieval from patterns with mapping tasks within a single integrated GIS environment. GeoPAT is designed to identify and analyze complex, highly generalized classes in spatial datasets. Examples include distinguishing between different styles of urban settlements using VHR images, delineating different landscape types in land cover maps, and mapping physiographic units from DEM. The concept of pattern-based spatial analysis is explained and the roles of all modules and functions are described. A case study example pertaining to delineation of landscape types in a subregion of NLCD is given. Performance evaluation is included to highlight GeoPAT's applicability to very large datasets. The GeoPAT toolbox is available for download from
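
    GeoPAT is implemented as GRASS GIS modules; as a rough, hypothetical illustration of the underlying idea (a compact scene signature plus a distance function), the sketch below describes two categorical raster patches by a class co-occurrence histogram and compares them with the Jensen-Shannon distance. The signature definition and the toy scenes are assumptions for demonstration, not GeoPAT's actual signatures.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def cooccurrence_signature(scene, n_classes):
    """Normalized histogram of horizontal class co-occurrences in a categorical raster patch."""
    pairs = np.stack([scene[:, :-1].ravel(), scene[:, 1:].ravel()], axis=1)
    hist = np.zeros((n_classes, n_classes))
    np.add.at(hist, (pairs[:, 0], pairs[:, 1]), 1)
    sig = hist.ravel()
    return sig / sig.sum()

# Toy example: two scenes with similar class proportions but different patterns
rng = np.random.default_rng(9)
random_scene = rng.integers(0, 3, size=(50, 50))                      # salt-and-pepper mix
striped_scene = np.tile(np.repeat(np.arange(3), 17)[:50], (50, 1))    # broad stripes
d = jensenshannon(cooccurrence_signature(random_scene, 3),
                  cooccurrence_signature(striped_scene, 3))
print(round(float(d), 3))   # large distance despite similar class proportions
```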

  19. Single-Molecule Analysis for RISC Assembly and Target Cleavage.

    PubMed

    Sasaki, Hiroshi M; Tadakuma, Hisashi; Tomari, Yukihide

    2018-01-01

    RNA-induced silencing complex (RISC) is a small RNA-protein complex that mediates silencing of complementary target RNAs. Biochemistry has been successfully used to characterize the molecular mechanism of RISC assembly and function for nearly two decades. However, further dissection of intermediate states during the reactions has been warranted to fill in the gaps in our understanding of RNA silencing mechanisms. Single-molecule analysis with total internal reflection fluorescence (TIRF) microscopy is a powerful imaging-based approach to interrogate complex formation and dynamics at the individual molecule level with high sensitivity. Combining this technique with our recently established in vitro reconstitution system of fly Ago2-RISC, we have developed a single-molecule observation system for RISC assembly. In this chapter, we summarize the detailed protocol for single-molecule analysis of chaperone-assisted assembly of fly Ago2-RISC as well as its target cleavage reaction.

  20. STAR FORMATION ACROSS THE W3 COMPLEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Román-Zúñiga, Carlos G.; Ybarra, Jason E.; Tapia, Mauricio

    We present a multi-wavelength analysis of the history of star formation in the W3 complex. Using deep, near-infrared ground-based images combined with images obtained with Spitzer and Chandra observatories, we identified and classified young embedded sources. We identified the principal clusters in the complex and determined their structure and extension. We constructed extinction-limited samples for five principal clusters and constructed K-band luminosity functions that we compare with those of artificial clusters with varying ages. This analysis provided mean ages and possible age spreads for the clusters. We found that IC 1795, the centermost cluster of the complex, still hosts a large fraction of young sources with circumstellar disks. This indicates that star formation was active in IC 1795 as recently as 2 Myr ago, simultaneous to the star-forming activity in the flanking embedded clusters, W3-Main and W3(OH). A comparison with carbon monoxide emission maps indicates strong velocity gradients in the gas clumps hosting W3-Main and W3(OH) and shows small receding clumps of gas at IC 1795, suggestive of rapid gas removal (faster than the T Tauri timescale) in the cluster-forming regions. We discuss one possible scenario for the progression of cluster formation in the W3 complex. We propose that early processes of gas collapse in the main structure of the complex could have defined the progression of cluster formation across the complex with relatively small age differences from one group to another. However, triggering effects could act as catalysts for enhanced efficiency of formation at a local level, in agreement with previous studies.

  1. Multiscale morphological filtering for analysis of noisy and complex images

    NASA Astrophysics Data System (ADS)

    Kher, A.; Mitra, S.

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
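
    As a minimal sketch of the idea of combining one-dimensional structuring elements in several orientations (not the authors' filter), the following Python code takes the pixelwise maximum of grayscale openings with line-shaped footprints at 0°, 45°, 90° and 135°: isolated speckle is suppressed while thin bright structures aligned with any of the orientations are preserved. The helper names and parameters are hypothetical.

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Binary footprint approximating a 1-D line of given length and angle."""
    fp = np.zeros((length, length), dtype=bool)
    c = (length - 1) / 2.0
    t = np.radians(angle_deg)
    for s in np.linspace(-c, c, 4 * length):
        r = int(round(c + s * np.sin(t)))
        q = int(round(c + s * np.cos(t)))
        fp[r, q] = True
    return fp

def multi_orientation_opening(img, length=9, angles=(0, 45, 90, 135)):
    """Max over grayscale openings with linear structuring elements.

    Thin bright structures aligned with any of the orientations survive,
    while isolated speckle-like bright spots are removed.
    """
    openings = [ndimage.grey_opening(img, footprint=line_footprint(length, a))
                for a in angles]
    return np.max(openings, axis=0)

# Toy example: noisy image with a bright diagonal line
rng = np.random.default_rng(1)
img = rng.random((128, 128))
rr = np.arange(20, 100)
img[rr, rr] += 3.0
filtered = multi_orientation_opening(img)
print(filtered.shape, round(float(filtered.max()), 2))
```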

  2. Multiscale Morphological Filtering for Analysis of Noisy and Complex Images

    NASA Technical Reports Server (NTRS)

    Kher, A.; Mitra, S.

    1993-01-01

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.

  3. Analysis of PETT images in psychiatric disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.
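
    A minimal sketch of the general approach described (complex Fourier coefficients as per-image feature vectors, followed by multivariate analysis) might look as follows, assuming NumPy and scikit-learn; the coefficient selection and synthetic images are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.decomposition import PCA

def fourier_features(images, n_coeffs=8):
    """Low-frequency complex 2D-FFT coefficients as a real feature vector."""
    feats = []
    for img in images:
        f = np.fft.fft2(img)
        block = f[:n_coeffs, :n_coeffs]          # low-frequency corner of the spectrum
        feats.append(np.concatenate([block.real.ravel(), block.imag.ravel()]))
    return np.array(feats)

# Toy example: PCA of Fourier features for a small synthetic image set
rng = np.random.default_rng(2)
images = [rng.random((64, 64)) for _ in range(20)]
X = fourier_features(images)
scores = PCA(n_components=3).fit_transform(X)
print(scores.shape)   # (20, 3)
```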

  4. Results from the Mars Pathfinder camera.

    PubMed

    Smith, P H; Bell, J F; Bridges, N T; Britt, D T; Gaddis, L; Greeley, R; Keller, H U; Herkenhoff, K E; Jaumann, R; Johnson, J R; Kirk, R L; Lemmon, M; Maki, J N; Malin, M C; Murchie, S L; Oberst, J; Parker, T J; Reid, R J; Sablotny, R; Soderblom, L A; Stoker, C; Sullivan, R; Thomas, N; Tomasko, M G; Wegryn, E

    1997-12-05

    Images of the martian surface returned by the Imager for Mars Pathfinder (IMP) show a complex surface of ridges and troughs covered by rocks that have been transported and modified by fluvial, aeolian, and impact processes. Analysis of the spectral signatures in the scene (at 440- to 1000-nanometer wavelength) reveal three types of rock and four classes of soil. Upward-looking IMP images of the predawn sky show thin, bluish clouds that probably represent water ice forming on local atmospheric haze (opacity approximately 0.5). Haze particles are about 1 micrometer in radius and the water vapor column abundance is about 10 precipitable micrometers.

  5. High Definition Confocal Imaging Modalities for the Characterization of Tissue-Engineered Substitutes.

    PubMed

    Mayrand, Dominique; Fradette, Julie

    2018-01-01

    Optimal imaging methods are necessary in order to perform a detailed characterization of thick tissue samples from either native or engineered tissues. Tissue-engineered substitutes feature increasing complexity, including multiple cell types and capillary-like networks. Therefore, technical approaches allowing the visualization of the inner structural organization and cellular composition of tissues are needed. This chapter describes an optical clearing technique which facilitates the detailed characterization of whole-mount samples from skin and adipose tissues (ex vivo tissues and in vitro tissue-engineered substitutes) when combined with spectral confocal microscopy and quantitative analysis of image renderings.

  6. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response

    NASA Astrophysics Data System (ADS)

    Nikiforov, M. P.; Reukov, V. V.; Thompson, G. L.; Vertegel, A. A.; Guo, S.; Kalinin, S. V.; Jesse, S.

    2009-10-01

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  7. SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process

    NASA Astrophysics Data System (ADS)

    Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres

    2015-10-01

    In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows the development of different algorithms for target detection, material mapping, and material identification, with applications in agriculture, security and defense, industry, etc. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough available prior information or spectral signatures, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, the N-FINDR is computationally expensive and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. The Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is therefore proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ calculates a bigger volume and has lower computational time complexity than other popular algorithms on synthetic and real scenarios.
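
    The SMV⊥ algorithm itself is not reproduced here; the sketch below is a simplified, hypothetical illustration of greedy endmember selection by Gram-Schmidt-style orthogonal projection (each new endmember is the pixel with the largest residual after projecting out the span of those already chosen). It conveys the flavor of orthogonal-projection-based extraction rather than the published method; all names and the synthetic data are assumptions.

```python
import numpy as np

def orthogonal_endmembers(pixels, n_endmembers):
    """Greedy endmember selection by orthogonal (Gram-Schmidt) projection.

    pixels: (N, B) array of N spectral pixels with B bands.
    At each step, the pixel with the largest residual norm after projecting
    out the span of the endmembers selected so far is added.
    """
    X = np.asarray(pixels, dtype=float)
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]   # brightest pixel first
    basis = []                                          # orthonormal basis of the selected span
    for _ in range(n_endmembers - 1):
        v = X[idx[-1]].copy()
        for b in basis:
            v -= np.dot(v, b) * b
        norm = np.linalg.norm(v)
        if norm > 0:
            basis.append(v / norm)
        # residual of every pixel with respect to the current span
        residual = X - sum(np.outer(X @ b, b) for b in basis)
        idx.append(int(np.argmax(np.linalg.norm(residual, axis=1))))
    return idx

# Toy example: 3 noisy endmember classes mixed into 500 pixels with 10 bands
rng = np.random.default_rng(3)
E = rng.random((3, 10))
A = rng.dirichlet(np.ones(3), size=500)
pixels = A @ E + 0.01 * rng.standard_normal((500, 10))
print(orthogonal_endmembers(pixels, 3))
```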

  8. All you need to know about the tricuspid valve: Tricuspid valve imaging and tricuspid regurgitation analysis.

    PubMed

    Huttin, Olivier; Voilliot, Damien; Mandry, Damien; Venner, Clément; Juillière, Yves; Selton-Suty, Christine

    2016-01-01

    The acknowledgment of tricuspid regurgitation (TR) as a stand-alone and progressive entity, worsening the prognosis of patients whatever its aetiology, has led to renewed interest in the tricuspid-right ventricular complex. The tricuspid valve (TV) is a complex, dynamic and changing structure. As the TV is not easy to analyse, three-dimensional imaging, cardiac magnetic resonance imaging and computed tomography scans may add to two-dimensional transthoracic and transoesophageal echocardiographic data in the analysis of TR. Not only the severity of TR, but also its mechanisms, the mode of leaflet coaptation, the degree of tricuspid annulus enlargement and tenting, and the haemodynamic consequences for right atrial and right ventricular morphology and function have to be taken into account. TR is functional and is a satellite of left-sided heart disease and/or elevated pulmonary artery pressure most of the time; a particular form is characterized by TR worsening after left-sided valve surgery, which has been shown to impair patient prognosis. A better description of TV anatomy and function by multimodality imaging should help with the appropriate selection of patients who will benefit from either surgical TV repair/replacement or a percutaneous procedure for TR, especially among patients who are to undergo or have undergone primary left-sided valvular surgery. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  9. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images, 3D renderings, and oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in the PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple computers, with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software, OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a new technology called Bonjour. This architecture offers a seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  10. Advanced forensic validation for human spermatozoa identification using SPERM HY-LITER™ Express with quantitative image analysis.

    PubMed

    Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko

    2017-07-01

    Identification of human semen is indispensable for the investigation of sexual assaults. Fluorescence staining methods using commercial kits, such as the series of SPERM HY-LITER™ kits, have been useful to detect human sperm via strong fluorescence. These kits have been examined from various forensic aspects. However, because of a lack of evaluation methods, these studies did not provide objective, or quantitative, descriptions of the results nor clear criteria for the decisions reached. In addition, the variety of validations was considerably limited. In this study, we conducted more advanced validations of SPERM HY-LITER™ Express using our established image analysis method. Use of this method enabled objective and specific identification of fluorescent sperm spots and quantitative comparisons of the sperm detection performance under complex experimental conditions. For body fluid mixtures, we examined interference with the fluorescence staining from other body fluid components. Effects of sample decomposition were simulated in high humidity and high temperature conditions. Semen with quite low sperm concentrations, such as azoospermia and oligospermia samples, represented the most challenging cases in application of the kit. Finally, the tolerance of the kit against various acidic and basic environments was analyzed. The validations herein provide useful information, previously unobtainable, for the practical application of the SPERM HY-LITER™ Express kit. Moreover, the versatility of our image analysis method toward various complex cases was demonstrated.

  11. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.

  12. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by use of spectral analysis of the spectral data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as a benthic community (seagrasses), were mapped.
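
    The Spectral Angle Mapper rule itself is simple to state: each pixel is assigned to the library spectrum with the smallest spectral angle arccos(s·r/(|s||r|)). A minimal NumPy sketch follows; the library spectra and the cube are synthetic placeholders, not AVIRIS data.

```python
import numpy as np

def spectral_angle_mapper(cube, library):
    """Classify each pixel of a hyperspectral cube by spectral angle.

    cube: (H, W, B) image cube; library: (K, B) reference spectra.
    Returns an (H, W) map of the index of the closest library spectrum.
    """
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B).astype(float)
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12
    refs = library / (np.linalg.norm(library, axis=1, keepdims=True) + 1e-12)
    # angle = arccos of the normalized dot product; smaller angle = better match
    angles = np.arccos(np.clip(pixels @ refs.T, -1.0, 1.0))
    return np.argmin(angles, axis=1).reshape(H, W)

# Toy example: 2 reference spectra, 20x20 cube with 50 bands
rng = np.random.default_rng(4)
library = rng.random((2, 50))
cube = library[rng.integers(0, 2, size=(20, 20))] + 0.05 * rng.standard_normal((20, 20, 50))
print(np.bincount(spectral_angle_mapper(cube, library).ravel()))
```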

  13. Applications of wavelets in morphometric analysis of medical images

    NASA Astrophysics Data System (ADS)

    Davatzikos, Christos; Tao, Xiaodong; Shen, Dinggang

    2003-11-01

    Morphometric analysis of medical images is playing an increasingly important role in understanding brain structure and function, as well as in understanding the way in which these change during development, aging and pathology. This paper presents three wavelet-based methods with related applications in morphometric analysis of magnetic resonance (MR) brain images. The first method handles cases where very limited datasets are available for the training of statistical shape models in the deformable segmentation. The method is capable of capturing a larger range of shape variability than the standard active shape models (ASMs) can, by using the elegant spatial-frequency decomposition of the shape contours provided by wavelet transforms. The second method addresses the difficulty of finding correspondences in anatomical images, which is a key step in shape analysis and deformable registration. The detection of anatomical correspondences is completed by using wavelet-based attribute vectors as morphological signatures of voxels. The third method uses wavelets to characterize the morphological measurements obtained from all voxels in a brain image, and the entire set of wavelet coefficients is further used to build a brain classifier. Since the classification scheme operates in a very-high-dimensional space, it can determine subtle population differences with complex spatial patterns. Experimental results are provided to demonstrate the performance of the proposed methods.
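
    As a rough illustration of using wavelet coefficients as compact morphological signatures (not the methods of the paper), the sketch below computes subband energies from a multi-level 2-D wavelet decomposition with PyWavelets; the feature definition is an assumption for demonstration only.

```python
import numpy as np
import pywt

def wavelet_feature_vector(img, wavelet="db2", level=3):
    """Energy of each 2D wavelet subband as a compact morphological signature."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]               # approximation energy
    for cH, cV, cD in coeffs[1:]:                   # detail subbands per level
        feats.extend([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])
    return np.array(feats)

# Toy example on a synthetic image
rng = np.random.default_rng(5)
img = rng.random((128, 128))
print(wavelet_feature_vector(img).shape)   # (1 + 3*level,) = (10,)
```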

  14. Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft

    NASA Astrophysics Data System (ADS)

    Nichols, T. W.

    Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) (to differentiate from particle streak velocimetry, which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with connected component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the techniques of implementation. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.

  15. Neurocognitive inefficacy of the strategy process.

    PubMed

    Klein, Harold E; D'Esposito, Mark

    2007-11-01

    The most widely used (and taught) protocols for strategic analysis, namely Strengths, Weaknesses, Opportunities, and Threats (SWOT) and Porter's (1980) Five Force Framework for industry analysis, have been found to be insufficient as stimuli for strategy creation or even as a basis for further strategy development. We approach this problem from a neurocognitive perspective. We see profound incompatibilities between the cognitive process of deductive reasoning channeled into the collective mind of strategists within the formal planning process through its tools of strategic analysis (i.e., rational technologies) and the essentially inductive reasoning process actually needed to address ill-defined, complex strategic situations. Thus, strategic analysis protocols that may appear to be and, indeed, are entirely rational and logical are not interpretable as such at the neuronal substrate level where thinking takes place. The analytical structure (or propositional representation) of these tools results in a mental dead end, the phenomenon known in cognitive psychology as functional fixedness. The difficulty lies with the inability of the brain to make out meaningful (i.e., strategy-provoking) stimuli from the mental images (or depictive representations) generated by strategic analysis tools. We propose decreasing dependence on these tools and conducting further research employing brain imaging technology to explore complex data handling protocols with richer mental representation and greater potential for strategy creation.

  16. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  17. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  18. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  19. Multimodal biophotonic workstation for live cell analysis.

    PubMed

    Esseling, Michael; Kemper, Björn; Antkowiak, Maciej; Stevenson, David J; Chaudet, Lionel; Neil, Mark A A; French, Paul W; von Bally, Gert; Dholakia, Kishan; Denz, Cornelia

    2012-01-01

    A reliable description and quantification of the complex physiology and reactions of living cells requires a multimodal analysis with various measurement techniques. We have investigated the integration of different techniques into a biophotonic workstation that can provide biological researchers with these capabilities. The combination of a micromanipulation tool with three different imaging principles is accomplished in a single inverted microscope which makes the results from all the techniques directly comparable. Chinese Hamster Ovary (CHO) cells were manipulated by optical tweezers while the feedback was directly analyzed by fluorescence lifetime imaging, digital holographic microscopy and dynamic phase-contrast microscopy. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Finding text in color images

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
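
    A minimal sketch of clustering on combined color and spatial features (in the spirit of, but not identical to, the algorithm described) could use k-means on RGB values augmented with scaled pixel coordinates; the spatial weighting and cluster count below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_spatial_clusters(rgb_img, n_clusters=4, spatial_weight=0.3):
    """Cluster pixels on RGB color plus (weighted) spatial position.

    Adding scaled x, y coordinates to the feature vector makes clusters
    spatially coherent, which helps group pixels of individual characters.
    """
    h, w, _ = rgb_img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        rgb_img.reshape(-1, 3) / 255.0,
        spatial_weight * yy.ravel() / h,
        spatial_weight * xx.ravel() / w,
    ])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(h, w)

# Toy example: a dark "character" block on a bright background
img = np.full((60, 80, 3), 240, dtype=np.uint8)
img[20:40, 30:50] = (30, 30, 200)
print(np.unique(color_spatial_clusters(img, n_clusters=2)))
```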

  1. 3-D interactive visualisation tools for HI spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  2. A Multiscale Vision Model applied to analyze EIT images of the solar corona

    NASA Astrophysics Data System (ADS)

    Portier-Fozzani, F.; Vandame, B.; Bijaoui, A.; Maucherat, A. J.; EIT Team

    2001-07-01

    The large dynamic range provided by the SOHO/EIT CCD (1 : 5000) is needed to observe the large EUV zoom of coronal structures from coronal holes up to flares. Histograms show that often a wide dynamic range is present in each image. Extracting hidden structures in the background level requires specific techniques such as the use of the Multiscale Vision Model (MVM, Bijaoui et al., 1998). This method, based on wavelet transformations, optimizes detection of objects of various sizes, however complex they may be. Bijaoui et al. built the Multiscale Vision Model to extract small dynamical structures from noise, mainly for studying galaxies. In this paper, we describe requirements for the use of this method with SOHO/EIT images (calibration, size of the image, dynamics of the subimage, etc.). Two different areas were studied revealing hidden structures: (1) classical coronal mass ejection (CME) formation and (2) a complex group of active regions with its evolution. The aim of this paper is to define carefully the constraints for this new method of imaging the solar corona with SOHO/EIT. Physical analysis derived from multi-wavelength observations will later complete these first results.

  3. Fractality of pulsatile flow in speckle images

    NASA Astrophysics Data System (ADS)

    Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.

    2016-05-01

    The scattering of coherent light from a system with underlying flow can be used to yield essential information about dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images, under different experimental configurations to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in-vitro phantoms and two in-vivo situations. The pumping mechanisms were varied ranging from mechanical pumps to the human heart for the in vivo case. The speckle images were analyzed using the techniques of fractal dimension and speckle contrast analysis. The results of these techniques for the various experimental scenarios were compared. The fractal dimension is a more sensitive measure to capture the complexity of the signal though it was observed that it is also extremely sensitive to the properties of the scattering medium and cannot recover the signal for thicker diffusers in comparison to speckle contrast.
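
    Of the two measures compared, the spatial speckle contrast is straightforward to compute as K = sigma/mean over a sliding window. The following sketch illustrates it with SciPy; the window size and the synthetic test pattern are arbitrary choices, not the experimental setup described.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast_map(img, window=7):
    """Local spatial speckle contrast K = sigma / mean over a sliding window."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    return np.sqrt(var) / (mean + 1e-12)

# Toy example: fully developed static speckle has contrast close to 1
rng = np.random.default_rng(6)
speckle = rng.exponential(scale=1.0, size=(256, 256))
print(round(float(speckle_contrast_map(speckle).mean()), 3))
```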

  4. Design on the x-ray oral digital image display card

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Gu, Guohua; Chen, Qian

    2009-10-01

    According to the main characteristics of X-ray imaging, the X-ray display card is successfully designed and debugged using the basic principle of correlated double sampling (CDS) and combined with embedded computer technology. CCD sensor drive circuit and the corresponding procedures have been designed. Filtering and sampling hold circuit have been designed. The data exchange with PC104 bus has been implemented. Using complex programmable logic device as a device to provide gating and timing logic, the functions which counting, reading CPU control instructions, corresponding exposure and controlling sample-and-hold have been completed. According to the image effect and noise analysis, the circuit components have been adjusted. And high-quality images have been obtained.

  5. High content screening of ToxCast compounds using Vala Sciences’ complex cell culturing systems (SOT)

    EPA Science Inventory

    US EPA’s ToxCast research program evaluates bioactivity for thousands of chemicals utilizing high-throughput screening assays to inform chemical testing decisions. Vala Sciences provides high content, multiplexed assays that utilize quantitative cell-based digital image analysis....

  6. Quantitative analysis of autophagic flux by confocal pH-imaging of autophagic intermediates

    PubMed Central

    Maulucci, Giuseppe; Chiarpotto, Michela; Papi, Massimiliano; Samengo, Daniela; Pani, Giovambattista; De Spirito, Marco

    2015-01-01

    Although numerous techniques have been developed to monitor autophagy and to probe its cellular functions, these methods cannot evaluate in sufficient detail the autophagy process, and suffer limitations from complex experimental setups and/or systematic errors. Here we developed a method to image, contextually, the number and pH of autophagic intermediates by using the probe mRFP-GFP-LC3B as a ratiometric pH sensor. This information is expressed functionally by AIPD, the pH distribution of the number of autophagic intermediates per cell. AIPD analysis reveals how intermediates are characterized by a continuous pH distribution, in the range 4.5–6.5, and therefore can be described by a more complex set of states rather than the usual biphasic one (autophagosomes and autolysosomes). AIPD shape and amplitude are sensitive to alterations in the autophagy pathway induced by drugs or environmental states, and allow a quantitative estimation of autophagic flux by retrieving the concentrations of autophagic intermediates. PMID:26506895
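
    As a hypothetical, much-simplified illustration of ratiometric readout from a dual-color probe such as mRFP-GFP-LC3B (not the authors' AIPD pipeline), the sketch below segments puncta on the pH-stable mRFP channel and reports the per-punctum GFP/mRFP intensity ratio, which decreases as the compartment acidifies; the threshold rule and the synthetic channels are assumptions.

```python
import numpy as np
from scipy import ndimage

def puncta_ratios(gfp, rfp, threshold=None):
    """Per-puncta GFP/mRFP intensity ratio from a two-channel image.

    Puncta are segmented on the pH-insensitive mRFP channel; the ratio of
    mean GFP to mean mRFP in each punctum falls as the compartment acidifies.
    """
    rfp = np.asarray(rfp, dtype=float)
    gfp = np.asarray(gfp, dtype=float)
    if threshold is None:
        threshold = rfp.mean() + 2 * rfp.std()
    labels, n = ndimage.label(rfp > threshold)
    ratios = []
    for i in range(1, n + 1):
        mask = labels == i
        ratios.append(gfp[mask].mean() / (rfp[mask].mean() + 1e-12))
    return np.array(ratios)

# Toy example: two synthetic puncta, one "acidic" (quenched GFP)
rfp = np.zeros((64, 64)); gfp = np.zeros((64, 64))
rfp[10:14, 10:14] = 100; gfp[10:14, 10:14] = 90    # near-neutral punctum
rfp[40:44, 40:44] = 100; gfp[40:44, 40:44] = 10    # acidic punctum
print(puncta_ratios(gfp, rfp, threshold=50))
```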

  7. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Research on infrared dim-point target detection and tracking under sea-sky-line complex background

    NASA Astrophysics Data System (ADS)

    Dong, Yu-xing; Li, Yan; Zhang, Hai-bo

    2011-08-01

    Target detection and tracking in infrared imagery is an important part of modern military defense systems. The detection and recognition of infrared dim-point targets under complex backgrounds is a difficult, strategically valuable, and challenging research topic. The main objects detected by carrier-borne infrared vigilance systems are sea-skimming aircraft and missiles. Because of the wide field of view of a vigilance system, the target usually lies within sea clutter, which makes detection and recognition considerably more difficult. Traditional point-target detection algorithms, such as adaptive background-prediction methods, are effective when the background has a dispersion-decreasing structure, but when the background contains large gray-level gradients, such as the sea-sky line or sea waves, they produce a higher false-alarm rate in those local areas and cannot obtain satisfactory results. Because a dim-point target itself has no obvious geometric or texture features, in our opinion the detection of dim-point targets in an image is, from a mathematical perspective, a problem of singular-function analysis; from an image-processing perspective, the key problem is identifying isolated singularities in the image. In essence, dim-point target detection is the separation of target and background according to their different singularity characteristics. The image from an infrared sensor is usually accompanied by various kinds of noise, caused by the complicated background or by the sensor itself, and this noise can affect target detection and tracking. The purpose of image preprocessing is therefore to reduce the effects of noise, raise the SNR of the image, and increase the contrast between target and background. According to the characteristics of low-altitude, sea-skimming infrared small targets, a median filter is first used to eliminate noise and improve the signal-to-noise ratio; a multi-point, multi-level vertical Sobel algorithm is then used to detect the sea-sky line so that sea and sky can be segmented in the image. Finally, a centroid tracking method is used to capture and trace the target. This method has been successfully used to track targets under complex sea-sky backgrounds.
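
    A much-simplified sketch of the described pipeline (median filtering, vertical-gradient detection of the sea-sky line, thresholding above the horizon, centroid extraction) is given below; the thresholds and the synthetic frame are assumptions, and the multi-point, multi-level Sobel step of the real system is reduced here to a single Sobel response.

```python
import numpy as np
from scipy import ndimage

def detect_dim_target(frame):
    """Simplified sketch: denoise, locate the sea-sky line from the vertical
    gradient, then report the centroid of the brightest blob above it."""
    img = ndimage.median_filter(np.asarray(frame, dtype=float), size=3)
    # vertical Sobel response is strongest along the near-horizontal sea-sky line
    grad = np.abs(ndimage.sobel(img, axis=0))
    horizon_row = int(np.argmax(grad.sum(axis=1)))
    sky = img[:horizon_row]
    # simple global threshold on the sky region (stand-in for the real detector)
    mask = sky > sky.mean() + 4 * sky.std()
    labels, n = ndimage.label(mask)
    if n == 0:
        return horizon_row, None
    peaks = ndimage.maximum(sky, labels, index=range(1, n + 1))
    target = 1 + int(np.argmax(peaks))
    cy, cx = ndimage.center_of_mass(mask, labels, target)
    return horizon_row, (cy, cx)

# Toy frame: bright "sea" below row 60, dim extended target around (25, 70)
rng = np.random.default_rng(7)
frame = np.full((120, 160), 10.0)
frame[60:] = 80.0
frame[24:27, 69:72] = 60.0
frame += rng.normal(0, 1, frame.shape)
print(detect_dim_target(frame))
```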

  9. Spatial and spectral analysis of corneal epithelium injury using hyperspectral images

    NASA Astrophysics Data System (ADS)

    Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-12-01

    Eye assessment is essential in preventing blindness. Currently, the existing methods for assessing corneal epithelium injury are complex and require expert knowledge. Hence, we have introduced a non-invasive technique using hyperspectral imaging (HSI) and an image analysis algorithm for corneal epithelium injury. Three groups of images were compared and analyzed, namely healthy eyes, injured eyes, and injured eyes with stain. Dimensionality reduction using principal component analysis (PCA) was applied to reduce the massive data volume and redundancy. The first 10 principal components (PCs) were selected for further processing. The mean vectors of the 10 PCs, in all 45 pairwise combinations, were computed and sent to two classifiers. A quadratic Bayes normal classifier (QDC) and a support vector classifier (SVC) were used in this study to discriminate the eleven eyes into three groups. As a result, the combined classifier of QDC and SVC with 2D PCA features (2DPCA-QDSVC) showed optimal performance and was utilized to classify normal and abnormal tissues using color image segmentation. The result was compared with human segmentation. The outcome showed that the proposed algorithm produces extremely promising results to assist the clinician in quantifying corneal injury.
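
    As a generic, hypothetical illustration of a PCA-plus-classifier stage (not the paper's combined 2DPCA-QDSVC classifier), the sketch below reduces synthetic spectra with PCA and cross-validates a quadratic discriminant and an SVC using scikit-learn; the data and parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in for pixel spectra from three tissue classes (200 bands each)
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(60, 200)) for m in (0.0, 0.3, 0.6)])
y = np.repeat([0, 1, 2], 60)

for name, clf in [("QDA", QuadraticDiscriminantAnalysis()),
                  ("SVC", SVC(kernel="rbf", C=1.0))]:
    pipe = make_pipeline(PCA(n_components=10), clf)   # reduce to 10 PCs, then classify
    scores = cross_val_score(pipe, X, y, cv=5)
    print(name, scores.mean().round(3))
```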

  10. Imaging Nicotine in Rat Brain Tissue by Use of Nanospray Desorption Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanekoff, Ingela T.; Thomas, Mathew; Carson, James P.

    Imaging mass spectrometry offers simultaneous detection of drugs, drug metabolites and endogenous substances in a single experiment. This is important when evaluating effects of a drug on a complex organ system such as the brain, where there is a need to understand how regional drug distribution impacts function. Nicotine is an addictive drug and its action in the brain is of high interest. Here we use nanospray desorption electrospray ionization, nano-DESI, imaging to discover the localization of nicotine in rat brain tissue after in vivo administration of nicotine. Nano-DESI is a new ambient technique that enables spatially-resolved analysis of tissue samples without special sample pretreatment. We demonstrate high sensitivity of nano-DESI imaging that enables detection of only 0.7 fmole nicotine per pixel in the complex brain matrix. Furthermore, by adding deuterated nicotine to the solvent, we examined how matrix effects, ion suppression, and normalization affect the observed nicotine distribution. Finally, we provide preliminary results suggesting that nicotine localizes to the hippocampal substructure called dentate gyrus.

  11. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines both automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867

  12. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.

  13. CDS Re Mix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    CDS (Change Detection Systems) is a mechanism for rapid visual analysis using complex image alignment algorithms. CDS is controlled through a simple interface designed so that anyone who can operate a digital camera can use it. A challenge of complex industrial systems such as nuclear power plants is to accurately identify changes in systems, structures, and components that may critically impact the operation of the facility. CDS can provide a means of early intervention before such issues evolve into safety and production challenges.
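
    The record does not detail the algorithms, but the general align-then-difference idea behind such change detection can be sketched as follows; the function, its parameters, and the use of phase correlation for alignment are assumptions for illustration only.

    # Rough sketch of an align-then-difference change detection step; this is a
    # generic illustration, not the CDS implementation.
    import numpy as np
    from skimage.registration import phase_cross_correlation
    from scipy import ndimage

    def change_map(before, after, threshold=0.1):
        """before, after: grayscale float images of the same scene."""
        shift, _, _ = phase_cross_correlation(before, after)   # estimate translation
        aligned = ndimage.shift(after, shift)                  # register 'after' onto 'before'
        diff = np.abs(before - aligned)
        return diff > threshold                                # boolean change mask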

  14. EDITORIAL: Imaging Systems and Techniques Imaging Systems and Techniques

    NASA Astrophysics Data System (ADS)

    Giakos, George; Yang, Wuqiang; Petrou, M.; Nikita, K. S.; Pastorino, M.; Amanatiadis, A.; Zentai, G.

    2011-10-01

    This special feature on Imaging Systems and Techniques comprises 27 technical papers, covering essential facets in imaging systems and techniques both in theory and applications, from research groups spanning three different continents. It mainly contains peer-reviewed articles from the IEEE International Conference on Imaging Systems and Techniques (IST 2011), held in Thessaloniki, Greece, as well as a number of articles relevant to the scope of this issue. The multifaceted field of imaging requires drastic adaptation to the rapid changes in our society, economy, environment, and the technological revolution; there is an urgent need to address and propose dynamic and innovative solutions to problems that tend to be either complex and static or rapidly evolving with a lot of unknowns. For instance, exploration of the engineering and physical principles of new imaging systems and techniques for medical applications, remote sensing, monitoring of space resources and enhanced awareness, exploration and management of natural resources, and environmental monitoring, are some of the areas that need to be addressed with urgency. Similarly, the development of efficient medical imaging techniques capable of providing physiological information at the molecular level is another important area of research. Advanced metabolic and functional imaging techniques, operating on multiple physical principles, using high resolution and high selectivity nanoimaging techniques, can play an important role in the diagnosis and treatment of cancer, as well as provide efficient drug-delivery imaging solutions for disease treatment with increased sensitivity and specificity. On the other hand, technical advances in the development of efficient digital imaging systems and techniques and tomographic devices operating on electric impedance tomography, computed tomography, single-photon emission and positron emission tomography detection principles are anticipated to have a significant impact on a wide spectrum of technological areas, such as medical imaging, pharmaceutical industry, analytical instrumentation, aerospace, remote sensing, lidars and ladars, surveillance, national defense, corrosion imaging and monitoring, sub-terrestrial and marine imaging. The complexity of the involved imaging scenarios, and demanding design parameters such as speed, signal-to-noise ratio, high specificity, high contrast and spatial resolution, high-scatter rejection, complex background and harsh environment, necessitate the development of a multifunctional, scalable and efficient imaging suite of sensors, solutions driven by innovation, operating on diverse detection and imaging principles. Finally, pattern recognition and image processing algorithms can significantly contribute to enhanced detection and imaging, including object classification, clustering, feature selection, texture analysis, segmentation, image compression and color representation under complex imaging scenarios, with applications in medical imaging, remote sensing, aerospace, radars, defense and homeland security. We feel confident that the exciting new contributions of this special feature on Imaging Systems and Techniques will appeal to the technical community. We would like to thank all authors as well as all anonymous reviewers and the MST Editorial Board, Publisher and staff for their tremendous efforts and invaluable support to enhance the quality of this significant endeavor.

  15. Referential processing: reciprocity and correlates of naming and imaging.

    PubMed

    Paivio, A; Clark, J M; Digdon, N; Bons, T

    1989-03-01

    To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing.

  16. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

    Foreign Object Debris (FOD) is any substance, debris, or article alien to an aircraft or system that can potentially cause serious damage when it appears on an airport runway. Because of the complex airport environment, quick and precise detection of FOD on the runway is an important safeguard for aircraft safety. This paper introduces a multi-sensor system comprising millimeter-wave radar and infrared (IR) image sensors and proposes a new FOD detection and recognition algorithm based on the inherent features of FOD. First, the FOD's location is accurately obtained by the millimeter-wave radar; the IR camera then acquires target and background images at that coordinate. Second, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Third, background subtraction is used to localize the FOD target in the runway region. Finally, a new characteristic computed on the small detail images of the FOD target is discussed and used for target classification. Experimental results show that the algorithm effectively reduces computational complexity, satisfies real-time requirements, and achieves high detection and recognition probability.
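
    The two image processing steps named above (runway edge extraction with the Hough transform and FOD localization by background subtraction) can be sketched with OpenCV as below; the parameter values and the omission of runway-region masking are illustrative simplifications, not the authors' code.

    # Minimal illustration of Hough-based edge extraction and background subtraction.
    import cv2
    import numpy as np

    def detect_fod(target_ir, background_ir, diff_thresh=30):
        """target_ir, background_ir: 8-bit grayscale IR images of the same runway view."""
        # 1. Runway edges as straight lines.
        edges = cv2.Canny(target_ir, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                                minLineLength=100, maxLineGap=10)

        # 2. Background subtraction (runway-region masking omitted for brevity).
        diff = cv2.absdiff(target_ir, background_ir)
        _, fod_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        return lines, fod_mask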

  17. HIV-1 Tat protein promotes formation of more-processive elongation complexes.

    PubMed Central

    Marciniak, R A; Sharp, P A

    1991-01-01

    The Tat protein of HIV-1 trans-activates transcription in vitro in a cell-free extract of HeLa nuclei. Quantitative analysis of the efficiency of elongation revealed that a majority of the elongation complexes generated by the HIV-1 promoter were not highly processive and terminated within the first 500 nucleotides. Tat trans-activation of transcription from the HIV-1 promoter resulted from an increase in processive character of the elongation complexes. More specifically, the analysis suggests that there exist two classes of elongation complexes initiating from the HIV promoter: a less-processive form and a more-processive form. Addition of purified Tat protein was found to increase the abundance of the more-processive class of elongation complex. The purine nucleoside analog, 5,6-dichloro-1-beta-D-ribofuranosylbenzimidazole (DRB) inhibits transcription in this reaction by decreasing the efficiency of elongation. Surprisingly, stimulation of transcription elongation by Tat was preferentially inhibited by the addition of DRB. Images PMID:1756726

  18. Mapping on complex neutrosophic soft expert sets

    NASA Astrophysics Data System (ADS)

    Al-Quran, Ashraf; Hassan, Nasruddin

    2018-04-01

    We introduce mappings on complex neutrosophic soft expert sets. Further, we investigate the basic operations and other related properties of the complex neutrosophic soft expert image and the complex neutrosophic soft expert inverse image of complex neutrosophic soft expert sets.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.
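
    A hedged sketch of the general idea (features from a convolutional network, images grouped by similarity to a few user-chosen examples) is given below; the choice of ResNet-18, the cosine-similarity assignment, and all function names are assumptions and not the INStINCt implementation.

    # Illustrative CNN-feature grouping; assumes torchvision >= 0.13 for the weights API.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()          # drop the classifier; keep 512-d features
    model.eval()

    prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                      T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    def embed(path):
        with torch.no_grad():
            return model(prep(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

    def assign_to_examples(image_paths, example_paths):
        """Group images by their nearest user-chosen example (cosine similarity)."""
        ex = torch.stack([embed(p) for p in example_paths])
        groups = {}
        for p in image_paths:
            sims = torch.nn.functional.cosine_similarity(embed(p).unsqueeze(0), ex)
            groups.setdefault(int(sims.argmax()), []).append(p)
        return groups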

  20. The neural basis of precise visual short-term memory for complex recognisable objects.

    PubMed

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Methods for the visualization and analysis of extracellular matrix protein structure and degradation.

    PubMed

    Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon

    2018-01-01

    This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.

  2. Analysis of the development of missile-borne IR imaging detecting technologies

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Wang, Feng

    2017-10-01

    Today's infrared imaging guided missiles face many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the growing complexity of operational environments, infrared imaging guided missiles must meet higher requirements for efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detection systems are constrained by practical considerations such as cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guided missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detection and information exploitation technologies are the key technologies that will shape the future direction of IR imaging guided missiles, and research on them will support the development of more robust and efficient missile-borne infrared imaging detection systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are key to effectively detecting, recognizing, and tracking targets in complicated operating and countermeasure environments. Innovative techniques for exploiting the target, background, and countermeasure information provided by the detection system form the basis for the missile to recognize targets and counter interference, jamming, and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guided missiles. This paper summarizes the current status and features of missile-borne IR imaging detection technologies and analyzes the key technologies and their development trends.

  3. Imaging agents for in vivo magnetic resonance and scintigraphic imaging

    DOEpatents

    Engelstad, Barry L.; Raymond, Kenneth N.; Huberty, John P.; White, David L.

    1991-01-01

    Methods are provided for in vivo magnetic resonance imaging and/or scintigraphic imaging of a subject using chelated transition metal and lanthanide metal complexes. Novel ligands for these complexes are provided.

  4. Characterization of Homopolymer and Polymer Blend Films by Phase Sensitive Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Ngwa, Wilfred; Wannemacher, Reinhold; Grill, Wolfgang

    2003-03-01

    We have used phase sensitive acoustic microscopy (PSAM) to study homopolymer thin films of polystyrene (PS) and poly(methyl methacrylate) (PMMA), as well as PS/PMMA blend films. We show from our results that PSAM can be used as a complementary and highly valuable technique for elucidating the three-dimensional (3D) morphology and micromechanical properties of thin films. Three-dimensional image acquisition with vector contrast provides the basis for complex V(z) analysis (per image pixel), 3D image processing, height profiling, and subsurface image analysis of the polymer films. Results show good agreement with previous studies. In addition, important new information on the three-dimensional structure and properties of polymer films is obtained. Homopolymer film structure analysis reveals (pseudo-)dewetting by retraction of droplets, resulting in a morphology that can serve as a starting point for the analysis of polymer blend thin films. The outcomes of confocal laser scanning microscopy studies performed on the same samples are correlated with the obtained results. Advantages and limitations of PSAM are discussed.

  5. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  6. Efficient Data Mining for Local Binary Pattern in Texture Image Analysis

    PubMed Central

    Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.

    2015-01-01

    Local binary pattern (LBP) is a simple gray scale descriptor to characterize the local distribution of the grey levels in an image. Multi-resolution LBP and/or combinations of the LBPs have shown to be effective in texture image analysis. However, it is unclear what resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth in a feature space. This limits the accuracy and time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively smaller number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains the low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach to different experimental designs and texture images. PMID:25767332
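
    A minimal multi-resolution LBP feature extractor can be written with scikit-image as below; concatenating histograms from a few (P, R) settings merely stands in for the paper's data-mining step over feature combinations, and the chosen settings are illustrative.

    # Multi-resolution LBP histogram features using scikit-image.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def multi_resolution_lbp(image, settings=((8, 1), (16, 2), (24, 3))):
        """image: 2D grayscale array. Returns one concatenated histogram feature."""
        feats = []
        for P, R in settings:
            lbp = local_binary_pattern(image, P, R, method="uniform")
            # 'uniform' LBP produces P + 2 distinct codes.
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
        return np.concatenate(feats)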

  7. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  8. Analysis of leaf surfaces using scanning ion conductance microscopy.

    PubMed

    Walker, Shaun C; Allen, Stephanie; Bell, Gordon; Roberts, Clive J

    2015-05-01

    Leaf surfaces are highly complex functional systems with well-defined chemistry and structure dictating the barrier and transport properties of the leaf cuticle. It is a significant imaging challenge to analyse the very thin and often complex wax-like leaf cuticle morphology in its natural state. Scanning electron microscopy (SEM) and, to a lesser extent, atomic force microscopy are techniques that have been used to study the leaf surface, but there remains information that is difficult to obtain via these approaches. SEM is able to produce the highly detailed, high-resolution images needed to study leaf structures at the submicron level. It typically operates in a vacuum or low-pressure environment and as a consequence is generally unable to deal with the in situ analysis of dynamic surface events at submicron scales. Atomic force microscopy also possesses the high-resolution imaging required and can follow dynamic events in ambient and liquid environments, but it can exaggerate small features and cannot image most leaf surfaces because of their inherent roughness at the micron scale. Scanning ion conductance microscopy (SICM), which operates in a liquid environment, provides a potential complementary analytical approach able to address these issues and has yet to be explored for studying leaf surfaces. Here we illustrate the potential of SICM on various leaf surfaces and compare the data to SEM and atomic force microscopy images of the same samples. In achieving successful imaging we also show that SICM can be used to study the wetting of hydrophobic surfaces in situ. This has potentially wider implications than the study of leaves alone, as surface wetting phenomena are important in a range of fundamental and applied studies. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.

  9. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    PubMed

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form a complex and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  10. MR Imaging of the Triangular Fibrocartilage Complex.

    PubMed

    Cody, Michael E; Nakamura, David T; Small, Kirstin M; Yoshioka, Hiroshi

    2015-08-01

    MR imaging has emerged as the mainstay in imaging internal derangement of the soft tissues of the musculoskeletal system largely because of superior contrast resolution. The complex geometry and diminutive size of the triangular fibrocartilage complex (TFCC) and its constituent structures can make optimal imaging of the TFCC challenging; therefore, production of clinically useful images requires careful optimization of image acquisition parameters. This article provides a foundation for advanced TFCC imaging including factors to optimize magnetic resonance images, arthrography, detailed anatomy, and classification of injury. In addition, clinical presentations and treatments for TFCC injury are briefly considered. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Microvessel prediction in H&E Stained Pathology Images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Yang, Lin; Wang, Shidan; Guo, Lei; Huang, Chenglong; Xie, Yang; Xiao, Guanghua

    2018-02-27

    Pathological angiogenesis has been identified in many malignancies as a potential prognostic factor and target for therapy. In most cases, angiogenic analysis is based on the measurement of microvessel density (MVD) detected by immunostaining of CD31 or CD34. However, most retrievable public data is generally composed of Hematoxylin and Eosin (H&E)-stained pathology images, for which it is difficult to obtain the corresponding immunohistochemistry images. The role of microvessels in H&E stained images has not been widely studied due to their complexity and heterogeneity. Furthermore, identifying microvessels manually for study is a labor-intensive task for pathologists, with high inter- and intra-observer variation. Therefore, it is important to develop automated microvessel-detection algorithms in H&E stained pathology images for clinical association analysis. In this paper, we propose a microvessel prediction method using fully convolutional neural networks. The feasibility of our proposed algorithm is demonstrated through experimental results on H&E stained images. Furthermore, the identified microvessel features were significantly associated with the patient clinical outcomes. This is the first study to develop an algorithm for automated microvessel detection in H&E stained pathology images.
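
    The following toy fully convolutional network in PyTorch illustrates pixel-wise prediction of a microvessel mask from an H&E patch; the architecture, channel counts, and class layout are assumptions for illustration and not the authors' network.

    # Toy fully convolutional network producing per-pixel class scores.
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            # 1x1 conv gives per-pixel class scores; upsample back to input size.
            self.classifier = nn.Conv2d(64, n_classes, 1)
            self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

        def forward(self, x):
            return self.upsample(self.classifier(self.encoder(x)))

    # Usage: logits = TinyFCN()(torch.randn(1, 3, 256, 256))  # -> (1, 2, 256, 256)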

  12. Advances in basic science methodologies for clinical diagnosis in female stress urinary incontinence.

    PubMed

    Abdulaziz, Marwa; Deegan, Emily G; Kavanagh, Alex; Stothers, Lynn; Pugash, Denise; Macnab, Andrew

    2017-06-01

    We provide an overview of advanced imaging techniques currently being explored to gain greater understanding of the complexity of stress urinary incontinence (SUI) through better definition of structural anatomic data. Two methods of imaging and analysis are detailed for SUI with or without prolapse: 1) open magnetic resonance imaging (MRI) with or without the use of reference lines; and 2) 3D reconstruction of the pelvis using MRI. An additional innovative method of assessment includes the use of near infrared spectroscopy (NIRS), which uses non-invasive photonics in a vaginal speculum to objectively evaluate pelvic floor muscle (PFM) function as it relates to SUI pathology. Advantages and disadvantages of these techniques are described. The recent innovation of open-configuration magnetic resonance imaging (MRO) allows images to be captured in sitting and standing positions, which better simulates states that correlate with urinary leakage and can be further enhanced with 3D reconstruction. By detecting direct changes in oxygenated muscle tissue, the NIRS vaginal speculum is able to provide insight into how the oxidative capacity of the PFM influences SUI. The small number of units able to provide patient evaluation using these techniques and their cost and relative complexity are major considerations, but if such imaging can optimize diagnosis, treatment allocation, and selection for surgery enhanced imaging techniques may prove to be a worthwhile and cost-effective strategy for assessing and treating SUI.

  13. Imaging agents for in vivo magnetic resonance and scintigraphic imaging

    DOEpatents

    Engelstad, B.L.; Raymond, K.N.; Huberty, J.P.; White, D.L.

    1991-04-23

    Methods are provided for in vivo magnetic resonance imaging and/or scintigraphic imaging of a subject using chelated transition metal and lanthanide metal complexes. Novel ligands for these complexes are provided.

  14. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are conducted separately on the two categories to recover diffuse color information. Experimental evaluation against existing methods on our light field dataset together with the Stanford light field archive verifies the effectiveness of the proposed algorithm. PMID:27253083
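
    The thresholding step that separates specular pixels into the two categories can be pictured with the small sketch below; the score maps, threshold values, and the omission of the multi-view color-variance refinement are illustrative assumptions.

    # Simplified clustering of specular pixels into unsaturated/saturated categories.
    import numpy as np

    def cluster_specular_pixels(intensity, specular_score, t_spec=0.6, t_sat=0.95):
        """intensity, specular_score: 2D arrays normalized to [0, 1].
        Returns masks for 'unsaturated' and 'saturated' specular pixels."""
        specular = specular_score > t_spec
        saturated = specular & (intensity >= t_sat)     # clipped highlights
        unsaturated = specular & (intensity < t_sat)    # recoverable from other views
        return unsaturated, saturated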

  15. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
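
    The tools themselves are Java, but the tiling idea behind NDPI-Splitter (split a large image into fixed-size blocks and discard nearly empty ones) can be pictured with the Python sketch below; the tile size and the standard-deviation test for empty tiles are assumptions.

    # Conceptual tiling of a large image with an empty-tile filter.
    import numpy as np

    def split_into_tiles(image, tile=2048, empty_std=2.0):
        """image: 2D or 3D numpy array. Yields (row, col, tile_array), skipping
        tiles whose intensity variation is below 'empty_std' (likely background)."""
        h, w = image.shape[:2]
        for r in range(0, h, tile):
            for c in range(0, w, tile):
                block = image[r:r + tile, c:c + tile]
                if block.std() < empty_std:      # filter out empty/background tiles
                    continue
                yield r, c, block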

  16. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as cross-boundary collaborative MI research platform for the rapid achievement in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. 1,2-Hydroxypyridonates as Contrast Agents for Magnetic ResonanceImaging: TREN-1,2-HOPO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jocher, Christoph J.; Moore, Evan G.; Xu, Jide

    2007-05-08

    1,2-Hydroxypyridinones (1,2-HOPO) form very stable lanthanide complexes that may be useful as contrast agents for Magnetic Resonance Imaging (MRI). X-ray diffraction of single crystals established that the solid state structures of the Eu(III) and the previously reported [Inorg. Chem. 2004, 43, 5452] Gd(III) complex are identical. The recently discovered sensitizing properties of 1,2-HOPO chelates for Eu(III) luminescence allow direct measurement of the number of water molecules in the metal complex. Fluorescence measurements of the Eu(III) complex corroborate that in solution two water molecules coordinate the lanthanide (q = 2) as proposed from the analysis of NMRD profiles. In addition, fluorescence measurements have verified the anion binding interactions of lanthanide TREN-1,2-HOPO complexes in solution, studied by relaxivity, revealing only very weak oxalate binding (K_A = 82.7 ± 6.5 M^-1). Solution thermodynamic studies of the metal complex and free ligand have been carried out using potentiometry, spectrophotometry and fluorescence spectroscopy. The metal ion selectivity of TREN-1,2-HOPO supports the feasibility of using 1,2-HOPO ligands for selective lanthanide binding [pGd = 19.3 (2); pZn = 15.2 (2), pCa = 8.8 (3)].

  18. Focal Cortical Dysplasia (FCD) lesion analysis with complex diffusion approach.

    PubMed

    Rajan, Jeny; Kannan, K; Kesavadas, C; Thomas, Bejoy

    2009-10-01

    Identification of Focal Cortical Dysplasia (FCD) can be difficult due to the subtle MRI changes. Though sequences like FLAIR (fluid attenuated inversion recovery) can detect a large majority of these lesions, there are smaller lesions without signal changes that can easily go unnoticed by the naked eye. The aim of this study is to improve the visibility of focal cortical dysplasia lesions in T1-weighted brain MRI images. In the proposed method, we used a complex diffusion-based approach to locate the FCD-affected areas. Based on the diffused image and a thickness map, a complex map is created, from which FCD areas can be easily identified. Brain MRIs of 48 subjects selected by neuroradiologists were given to computer scientists, who developed the complex map for identifying the cortical dysplasia. The scientists were blinded to the neuroradiologists' MRI interpretation results. The FCD could be identified in all the patients in whom surgery was done; however, three patients had false-positive lesions. More lesions were identified in patients in whom surgery was not performed, and lesions were seen in a few of the controls. These were considered false positives.
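
    A rough sketch of a nonlinear complex diffusion iteration (in the spirit of Gilboa-style complex diffusion, on which such approaches build) is given below; the update scheme, parameters, and the use of the imaginary part as an edge indicator are illustrative assumptions rather than the authors' implementation.

    # Sketch of nonlinear complex diffusion on a 2D image.
    import numpy as np

    def complex_diffusion(img, n_iter=20, dt=0.2, k=2.0, theta=np.pi / 30):
        I = img.astype(np.complex128)
        for _ in range(n_iter):
            # Diffusion coefficient driven by the imaginary part (edge indicator).
            c = np.exp(1j * theta) / (1.0 + (np.imag(I) / (k * theta)) ** 2)
            gy, gx = np.gradient(I)
            div = np.gradient(c * gy, axis=0) + np.gradient(c * gx, axis=1)
            I = I + dt * div
        # The imaginary part behaves like a smoothed second-derivative (edge/ramp) map.
        return np.real(I), np.imag(I)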

  19. Nonsubsampled rotated complex wavelet transform (NSRCxWT) for medical image fusion related to clinical aspects in neurocysticercosis.

    PubMed

    Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil

    2017-02-01

    Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (the definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for analyzing the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here combines CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and edge-related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, enriched fused image. The proposed technique is tested on pilot-study data sets of patients infected with NCC. The quality of the fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for their visual quality by subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets from 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be part of a computer-aided detection and diagnosis (CADD) system that assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
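
    As a simplified stand-in for the NSRCxWT, the sketch below fuses two registered slices with an ordinary discrete wavelet transform (PyWavelets), averaging the approximation coefficients and taking the maximum-magnitude detail coefficients; the wavelet, level, and fusion rules shown are assumptions for illustration.

    # Wavelet-domain fusion of registered CT and MRI slices of equal size.
    import numpy as np
    import pywt

    def fuse_ct_mri(ct, mri, wavelet="db2", level=2):
        ca = pywt.wavedec2(ct, wavelet, level=level)
        cb = pywt.wavedec2(mri, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                      # average rule (approximation)
        for da, db in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)   # max rule (details)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)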

  20. Spectral imaging: principles and applications.

    PubMed

    Garini, Yuval; Young, Ian T; McNamara, George

    2006-08-01

    Spectral imaging extends the capabilities of biological and clinical studies to simultaneously study multiple features such as organelles and proteins qualitatively and quantitatively. Spectral imaging combines two well-known scientific methodologies, namely spectroscopy and imaging, to provide a new advantageous tool. The need to measure the spectrum at each point of the image requires combining dispersive optics with the more common imaging equipment, and introduces constraints as well. The principles of spectral imaging and a few representative applications are described. Spectral imaging analysis is necessary because the complex data structure cannot be analyzed visually. A few of the algorithms are discussed with emphasis on their usage in different experimental modes (fluorescence and bright field). Finally, spectral imaging, like any method, should be evaluated in light of its advantages to specific applications, a selection of which is described. Spectral imaging is a relatively new technique and its full potential is yet to be exploited. Nevertheless, several applications have already shown its potential. (c) 2006 International Society for Analytical Cytology.

  1. Hyperspectral small animal fluorescence imaging: spectral selection imaging

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Hall, Heidi; Vizard, Douglas; Robinson, J. Paul

    2008-02-01

    Molecular imaging is a rapidly growing area of research, fueled by needs in pharmaceutical drug development for methods for high-throughput screening, pre-clinical and clinical screening for visualizing tumor growth and drug targeting, and a growing number of applications in the molecular biology fields. Small animal fluorescence imaging employs fluorescent probes to target molecular events in vivo, with a large number of molecular targeting probes readily available. The ease with which new targeting compounds can be developed, the short acquisition times, and the low cost (compared to microCT, MRI, or PET) make fluorescence imaging attractive. However, small animal fluorescence imaging suffers from high optical scattering, absorption, and autofluorescence. Many of these problems can be overcome through multispectral imaging techniques, which collect images at different fluorescence emission wavelengths, followed by analysis, classification, and spectral deconvolution methods to isolate signals from fluorescence emission. We present an alternative to the current method, using hyperspectral excitation scanning (spectral selection imaging), a technique that allows excitation at any wavelength in the visible and near-infrared wavelength range. In many cases, excitation imaging may be more effective at identifying specific fluorescence signals because of the higher complexity of the fluorophore excitation spectrum. Because the excitation is filtered and not the emission, the resolution limit and image shift imposed by acousto-optic tunable filters have no effect on imager performance. We will discuss design of the imager, optimizing the imager for use in small animal fluorescence imaging, and application of spectral analysis and classification methods for identifying specific fluorescence signals.

  2. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics.

    PubMed

    Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter

    2012-10-04

    Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.
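
    The flavor of graph-based similarity can be conveyed with NetworkX's inexact graph edit distance, as below; the region attributes, node-match rule, and the use of graph_edit_distance (which is exponential in general) are illustrative stand-ins for the paper's tree-search matcher and cost design.

    # Region adjacency graphs and an inexact graph-distance between two images.
    import networkx as nx

    def region_graph(regions, adjacency):
        """regions: {node_id: {'mean_color': ..., 'area': ...}}; adjacency: iterable of node pairs."""
        g = nx.Graph()
        for node, attrs in regions.items():
            g.add_node(node, **attrs)
        g.add_edges_from(adjacency)
        return g

    def image_distance(g1, g2):
        # Nodes match when their mean colors are close (illustrative criterion only).
        node_match = lambda a, b: abs(a["mean_color"] - b["mean_color"]) < 10
        return nx.graph_edit_distance(g1, g2, node_match=node_match)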

  3. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    PubMed Central

    2012-01-01

    Background Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. Methods The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923. PMID:23035717

  4. Design and analysis of x-ray vision systems for high-speed detection of foreign body contamination in food

    NASA Astrophysics Data System (ADS)

    Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.

    1994-10-01

    In the food industry there is an ever-increasing need to control and monitor food quality. In recent years fully automated x-ray inspection systems have been used to inspect food on-line for foreign body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes; this makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the very best image possible. In this paper we present a method of analyzing the x-ray imaging system in order to consider the contrast obtained when viewing small defects.

  5. Terahertz imaging applied to cancer diagnosis.

    PubMed

    Brun, M-A; Formanek, F; Yasuda, A; Sekine, M; Ando, N; Eishii, Y

    2010-08-21

    We report on terahertz (THz) time-domain spectroscopy imaging of 10 μm-thick histological sections. The sections are prepared according to standard pathological procedures and deposited on a quartz window for measurements in reflection geometry. Simultaneous acquisition of visible images enables registration of THz images and thus the use of digital pathology tools to investigate the links between the underlying cellular structure and specific THz information. An analytic model taking into account the polarization of the THz beam, its incidence angle, the beam shift between the reference and sample pulses as well as multiple reflections within the sample is employed to determine the frequency-dependent complex refractive index. Spectral images are produced through segmentation of the extracted refractive index data using clustering methods. Comparisons of visible and THz images demonstrate spectral differences not only between tumor and healthy tissues but also within tumors. Further visualization using principal component analysis suggests different mechanisms as to the origin of image contrast.
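
    The segmentation-by-clustering step can be sketched as k-means over per-pixel complex refractive index spectra, as below; the array shapes, the real/imaginary stacking, and the cluster count are assumptions, and the analytic extraction of the refractive index is taken as already done.

    # Cluster per-pixel complex refractive index spectra into spectral classes.
    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_segmentation(n_complex, n_clusters=3):
        """n_complex: array of shape (H, W, F) with the complex refractive index
        at F frequencies per pixel. Returns an (H, W) label image."""
        H, W, F = n_complex.shape
        feats = np.concatenate([n_complex.real, n_complex.imag], axis=-1).reshape(H * W, 2 * F)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
        return labels.reshape(H, W)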

  6. An image understanding system using attributed symbolic representation and inexact graph-matching

    NASA Astrophysics Data System (ADS)

    Eshera, M. A.; Fu, K.-S.

    1986-09-01

    A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar image.

  7. Wide-field surface plasmon microscopy of nano- and microparticles: features, benchmarking, limitations, and bioanalytical applications

    NASA Astrophysics Data System (ADS)

    Nizamov, Shavkat; Scherbahn, Vitali; Mirsky, Vladimir M.

    2017-05-01

    Detection of nano- and micro-particles is an important task for chemical analytics, the food industry, biotechnology, environmental monitoring and many other fields of science and industry. For this purpose, a method based on the detection and analysis of minute signals in surface plasmon resonance images due to adsorption of single nanoparticles was developed. This new technology allows real-time detection of the interaction of single nano- and micro-particles with the sensor surface. Adsorption of each nanoparticle leads to a characteristic diffraction image whose intensity depends on the size and chemical composition of the particle. The adsorption rate characterizes the volume concentration of nano- and micro-particles. The large monitored sensor surface area enables a high dynamic range of counting and a correspondingly high dynamic range on the concentration scale. Depending on the type of particles and experimental conditions, the detection limit for aqueous samples can be below 1000 particles per microliter. For application of the method in complex media, nanoparticle images are discriminated from image perturbations due to matrix components. First, the characteristic SPRM images of nanoparticles (templates) are collected in aqueous suspensions or spiked real samples. Then, the detection of nanoparticles in complex media using template matching is performed. The detection of various NPs in consumer products like cosmetics, mineral water, juices, and wines was shown at the sub-ppb level. The method can be applied for ultrasensitive detection and analysis of nano- and micro-particles of biological (bacteria, viruses, endosomes), biotechnological (liposomes, protein nanoparticles for drug delivery) or technical origin.
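
    The template-matching step can be illustrated with OpenCV as below; the difference-image input, score threshold, and the absence of duplicate merging are simplifying assumptions, not the authors' processing chain.

    # Template matching of stored nanoparticle signatures against an SPR difference image.
    import cv2
    import numpy as np

    def match_particles(spr_diff_image, templates, score_thresh=0.7):
        """spr_diff_image: float32 grayscale frame; templates: list of float32 patches."""
        detections = []
        for tpl in templates:
            response = cv2.matchTemplate(spr_diff_image, tpl, cv2.TM_CCOEFF_NORMED)
            ys, xs = np.where(response >= score_thresh)
            detections.extend(zip(xs.tolist(), ys.tolist()))
        return detections   # (x, y) upper-left corners of matches; duplicates not merged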

  8. Multiplexed 3D FRET imaging in deep tissue of live embryos

    PubMed Central

    Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei

    2015-01-01

    Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limit their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonant energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca2+ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920

  9. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs and proposes a new telescopic-characteristics test method for the circuit breaker operating mechanism spring based on image processing. A high-speed camera captures image sequences of the spring movement while the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image analysis method avoids the complex sensor installation required by traditional mechanical measurements and supports online monitoring and status assessment of the circuit breaker spring.

  10. High frame-rate computational ghost imaging system using an optical fiber phased array and a low-pixel APD array.

    PubMed

    Liu, Chunbo; Chen, Jingqiu; Liu, Jiaxin; Han, Xiang'e

    2018-04-16

    To obtain a high imaging frame rate, a computational ghost imaging system scheme is proposed based on an optical fiber phased array (OFPA). Through high-speed electro-optic modulators, the randomly modulated OFPA can provide much faster speckle projection, and the speckle patterns can be precomputed from the geometry of the fiber array and the known modulation phases. Receiving the signal light with a low-pixel APD array effectively decreases the required sampling quantity and computational complexity owing to the reduced data dimensionality, while avoiding the image aliasing caused by the spatial periodicity of the speckles. The results of analysis and simulation show that the frame rate of the proposed imaging system can be significantly improved compared with traditional systems.
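
    Leaving aside the OFPA-specific optics, the reconstruction that correlation-based computational ghost imaging relies on can be sketched in a few lines; the following is a generic NumPy sketch of the standard covariance estimator, not the specific scheme of the paper.

      # Generic correlation-based ghost imaging reconstruction: the image is
      # recovered as the covariance between each precomputed speckle pattern
      # and the corresponding single-pixel (bucket) measurement.
      import numpy as np

      def cgi_reconstruct(patterns, bucket):
          """patterns: (N, H, W) speckle intensities; bucket: (N,) detector signals."""
          patterns = np.asarray(patterns, dtype=float)
          bucket = np.asarray(bucket, dtype=float)
          mean_pattern = patterns.mean(axis=0)
          mean_bucket = bucket.mean()
          # <B_i * I_i(x, y)> - <B> <I(x, y)>
          return (np.tensordot(bucket, patterns, axes=(0, 0)) / len(bucket)
                  - mean_bucket * mean_pattern)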

  11. The organization of LH2 complexes in membranes from Rhodobacter sphaeroides.

    PubMed

    Olsen, John D; Tucker, Jaimey D; Timney, John A; Qian, Pu; Vassilev, Cvetelin; Hunter, C Neil

    2008-11-07

    The mapping of the photosynthetic membrane of Rhodobacter sphaeroides by atomic force microscopy (AFM) revealed a unique organization of arrays of dimeric reaction center-light harvesting I-PufX (RC-LH1-PufX) core complexes surrounded and interconnected by light-harvesting LH2 complexes (Bahatyrova, S., Frese, R. N., Siebert, C. A., Olsen, J. D., van der Werf, K. O., van Grondelle, R., Niederman, R. A., Bullough, P. A., Otto, C., and Hunter, C. N. (2004) Nature 430, 1058-1062). However, membrane regions consisting solely of LH2 complexes were under-represented in these images because these small, highly curved areas of membrane rendered them difficult to image even using gentle tapping mode AFM and impossible with contact mode AFM. We report AFM imaging of membranes prepared from a mutant of R. sphaeroides, DPF2G, that synthesizes only the LH2 complexes and assembles spherical intracytoplasmic membrane vesicles of approximately 53 nm diameter in vivo. By opening these vesicles and adsorbing them onto mica to form small (≤120 nm), largely flat sheets, we have been able to visualize the organization of these LH2-only membranes for the first time. The transition from highly curved vesicle to the planar sheet is accompanied by a change in the packing of the LH2 complexes such that approximately half of the complexes are raised off the mica surface by approximately 1 nm relative to the rest. This vertical displacement produces a very regular corrugated appearance of the planar membrane sheets. Analysis of the topographs was used to measure the distances and angles between the complexes. These data are used to model the organization of LH2 complexes in the original, curved membrane. The implications of this architecture for the light harvesting function and diffusion of quinones in native membranes of R. sphaeroides are discussed.

  12. Specialized Diagnostic Investigations to Assess Ocular Status in Hypertensive Diseases of Pregnancy.

    PubMed

    Bakhda, Rahul Navinchandra

    2016-04-22

    This review describes specialized diagnostic investigations to assess ocular status in hypertensive diseases of pregnancy. Ocular assessment can aid in early detection for prompt multidisciplinary treatment, obstetric intervention and follow-up. The investigations accurately predict the possible causes of blindness in hypertensive diseases of pregnancy. The investigations include fluorescein angiography, ophthalmodynamometry, fluorophotometry, imaging modalities, OCT, ultrasonography, doppler velocimetry and blood chemistry analysis. The review includes a summary of imaging techniques and related recent developments to assess the neuro-ophthalmic aspects of the disease. The imaging modalities have been instrumental in understanding the complex neuropathophysiological mechanisms of eclamptic seizures. The importance of blood chemistry analysis in hypertensive diseases of pregnancy has been emphasized. The investigations have made a significant contribution in improving the standards of antenatal care and reducing maternal and fetal morbidity and mortality.

  13. Specialized Diagnostic Investigations to Assess Ocular Status in Hypertensive Diseases of Pregnancy

    PubMed Central

    Bakhda, Rahul Navinchandra

    2016-01-01

    This review describes specialized diagnostic investigations to assess ocular status in hypertensive diseases of pregnancy. Ocular assessment can aid in early detection for prompt multidisciplinary treatment, obstetric intervention and follow-up. The investigations accurately predict the possible causes of blindness in hypertensive diseases of pregnancy. The investigations include fluorescein angiography, ophthalmodynamometry, fluorophotometry, imaging modalities, OCT, ultrasonography, doppler velocimetry and blood chemistry analysis. The review includes a summary of imaging techniques and related recent developments to assess the neuro-ophthalmic aspects of the disease. The imaging modalities have been instrumental in understanding the complex neuropathophysiological mechanisms of eclamptic seizures. The importance of blood chemistry analysis in hypertensive diseases of pregnancy has been emphasized. The investigations have made a significant contribution in improving the standards of antenatal care and reducing maternal and fetal morbidity and mortality. PMID:28933399

  14. Quantum red-green-blue image steganography

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh

    One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field provides an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms are investigated based on the Least Significant Bit (LSB). The first algorithm employs only one of the image's channels to cover secret data. The second procedure is based on the LSB XORing technique, and the last algorithm utilizes two channels of the color image to hide secret quantum data. The performances of the proposed schemes are analyzed by using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performances, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.

  15. An image overall complexity evaluation method based on LSD line detection

    NASA Astrophysics Data System (ADS)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    Man-made scenes, from city traffic roads to engineered buildings, contain many linear features, so evaluating image complexity from line information has become an important research direction in digital image processing. This paper detects the straight-line information in an image and uses line-based measures as parameter indices to establish a quantitative, accurate mathematical relationship with complexity. The LSD line detection algorithm, which has a good straight-line detection effect, is used to detect the lines, and the detected lines are grouped into indices following an expert consultation strategy. A neural network is then used for weight training to obtain the weight coefficient of each index, and the image complexity is calculated with the resulting complexity calculation model. The experimental results show that the proposed method is effective: the number of straight lines in the image, their degree of dispersion, their uniformity and related properties all affect the complexity of the image.
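
    As a rough illustration of such a line-based complexity score, the sketch below uses the probabilistic Hough transform from scikit-image as a stand-in for the LSD detector, and fixed illustrative weights in place of the neural-network-trained coefficients; none of the parameter values come from the paper.

      # Stand-in sketch of a line-based complexity score. The probabilistic
      # Hough transform replaces the LSD detector used in the paper, and the
      # fixed weights are illustrative; the paper learns them with a network.
      import numpy as np
      from skimage.feature import canny
      from skimage.transform import probabilistic_hough_line

      def line_complexity(gray, weights=(0.5, 0.3, 0.2)):
          edges = canny(gray)
          lines = probabilistic_hough_line(edges, threshold=10,
                                           line_length=20, line_gap=3)
          if not lines:
              return 0.0
          lengths = [np.hypot(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in lines]
          angles = [np.arctan2(y1 - y0, x1 - x0) for (x0, y0), (x1, y1) in lines]
          # Simple indices: line count, mean length, orientation dispersion.
          features = (len(lines), np.mean(lengths), np.std(angles))
          return float(np.dot(weights, features))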

  16. Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror

    NASA Astrophysics Data System (ADS)

    Harris, L. D.; Camp, J. J.

    1984-10-01

    A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.

  17. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient details of parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmenting methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
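
    Outside FIJI, a comparable quick screen of several global automatic thresholds can be run with scikit-image, as in the sketch below; this is a generic stand-in for illustration, not the BioVoxxel toolbox workflow evaluated in the paper.

      # Quick comparison of several global automatic thresholds on a
      # z-projected image using scikit-image.
      from skimage import filters

      def compare_global_thresholds(zproj):
          methods = {
              "otsu": filters.threshold_otsu,
              "li": filters.threshold_li,
              "yen": filters.threshold_yen,          # entropy-based
              "triangle": filters.threshold_triangle,
          }
          results = {}
          for name, fn in methods.items():
              t = fn(zproj)
              mask = zproj > t
              results[name] = (t, mask.mean())       # threshold, foreground fraction
          return results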

  18. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally manually excluded using rectangular masks in image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from raw images to get the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of the sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
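
    The second step, mask-assisted background fitting and subtraction, can be sketched line by line with NumPy as below; this is a minimal illustration assuming a foreground mask is already available, and it does not include the sliding-window variant or the automatic segmentation step of the paper.

      # Mask-assisted line-by-line polynomial flattening of an AFM topograph:
      # background pixels (mask == False) in each scan line are fit with a
      # low-order polynomial that is then subtracted from the whole line.
      import numpy as np

      def flatten_lines(topo, foreground_mask, order=2):
          flattened = np.empty_like(topo, dtype=float)
          x = np.arange(topo.shape[1])
          for i, line in enumerate(topo):
              bg = ~foreground_mask[i]
              if bg.sum() > order:                      # need enough background points
                  coeffs = np.polyfit(x[bg], line[bg], order)
                  flattened[i] = line - np.polyval(coeffs, x)
              else:
                  flattened[i] = line - line.mean()
          return flattened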

  19. The Research on Dryland Crop Classification Based on the Fusion of SENTINEL-1A SAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.

    2018-04-01

    In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in terms of theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, and emphasis was put on dryland crop classification under a complex crop planting structure, with corn and cotton as the research objects. Considering the differences among various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared with each other, and the GS and Brovey methods were proved to be more applicable in the study area. The classification was then conducted based on an object-oriented technique. For the GS and Brovey fusion images and the GF-1 optical image, the nearest neighbour algorithm was adopted to realize supervised classification with the same training samples. Based on the sample plots in the study area, the accuracy assessment was conducted subsequently. The overall accuracy and kappa coefficient of the fusion images were all higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8 %, and the Kappa coefficient was 0.644. Thus, the results showed that GS and Brovey fusion images were superior to optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.

  20. In situ investigation of complex BaSO4 fiber generation in the presence of sodium polyacrylate. 1. Kinetics and solution analysis.

    PubMed

    Wang, Tongxin; Cölfen, Helmut

    2006-10-10

    Simple solution analysis of the formation mechanism of complex BaSO(4) fiber bundles in the presence of polyacrylate sodium salt, via a bioinspired approach, is reported. Titration of the polyacrylate solution with Ba(2+) revealed complex formation and the optimum ratio of Ba(2+) to polyacrylate for a slow polymer-controlled mineralization process. This is a much simpler and faster method to determine the appropriate additive/mineral concentration pairs as opposed to more common crystallization experiments in which the additive/mineral concentration is varied. Time-dependent pH measurements were carried out to determine the concentration of solution species from which BaSO(4) supersaturation throughout the fiber formation process can be calculated and the second-order kinetics of the Ba(2+) concentration in solution can be identified. Conductivity measurements, pH measurements, and analytical ultracentrifugation revealed the first formed species to be Ba-polyacrylate complexes. A combination of the solution analysis results and optical microscopic images allows a detailed picture of the complex precipitation and self-organization process, a particle-mediated process involving mesoscopic transformations, to be revealed.

  1. Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks

    PubMed Central

    Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi

    2017-01-01

    The application of electroencephalogram (EEG) signals generated while humans view images is a new thrust in image retrieval technology. A P300 component in the EEG is induced when subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary in relation to different experimental parameters, such as target probability and stimulus semantics. Thus, we proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which the image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection on the basis of a traditional single-trial P300 detection algorithm. We defined an image complexity parameter based on the features of the different layers of a convolutional neural network (CNN). We used the TRICP algorithm to compute the complexity of an image, quantified the effect of images of different complexity on the P300 component, and trained specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm. Results show that TRICP performs significantly better than HDCA (Wilcoxon signed-rank test, p<0.05). Thus, the proposed method can be used in other visual-task-related single-trial event-related potential detection. PMID:29283998

  2. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
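
    A simplified view of the contour reconstruction step is the classical circular Hough transform; the OpenCV sketch below illustrates that textbook version, not the improved algorithm developed in the paper, and all parameter values are illustrative and would need tuning per image set.

      # Classical circular Hough detection of bubble/drop outlines with OpenCV.
      import cv2
      import numpy as np

      def detect_circles(gray_uint8, min_r=5, max_r=80):
          blurred = cv2.medianBlur(gray_uint8, 5)      # suppress noise before voting
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                                     param1=120, param2=40,
                                     minRadius=min_r, maxRadius=max_r)
          if circles is None:
              return []
          # Each entry is (center_x, center_y, radius) in pixels.
          return [(x, y, r) for x, y, r in np.round(circles[0]).astype(int)]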

  3. Macromolecular structures probed by combining single-shot free-electron laser diffraction with synchrotron coherent X-ray imaging.

    PubMed

    Gallagher-Jones, Marcus; Bessho, Yoshitaka; Kim, Sunam; Park, Jaehyun; Kim, Sangsoo; Nam, Daewoong; Kim, Chan; Kim, Yoonhee; Noh, Do Young; Miyashita, Osamu; Tama, Florence; Joti, Yasumasa; Kameshima, Takashi; Hatsui, Takaki; Tono, Kensuke; Kohmura, Yoshiki; Yabashi, Makina; Hasnain, S Samar; Ishikawa, Tetsuya; Song, Changyong

    2014-05-02

    Nanostructures formed from biological macromolecular complexes utilizing the self-assembly properties of smaller building blocks such as DNA and RNA hold promise for many applications, including sensing and drug delivery. New tools are required for their structural characterization. Intense, femtosecond X-ray pulses from X-ray free-electron lasers enable single-shot imaging allowing for instantaneous views of nanostructures at ambient temperatures. When combined judiciously with synchrotron X-rays of a complimentary nature, suitable for observing steady-state features, it is possible to perform ab initio structural investigation. Here we demonstrate a successful combination of femtosecond X-ray single-shot diffraction with an X-ray free-electron laser and coherent diffraction imaging with synchrotron X-rays to provide an insight into the nanostructure formation of a biological macromolecular complex: RNA interference microsponges. This newly introduced multimodal analysis with coherent X-rays can be applied to unveil nano-scale structural motifs from functional nanomaterials or biological nanocomplexes, without requiring a priori knowledge.

  4. Cone beam volume tomography: an imaging option for diagnosis of complex mandibular third molar anatomical relationships.

    PubMed

    Danforth, Robert A; Peck, Jerry; Hall, Paul

    2003-11-01

    Complex impacted third molars present potential treatment complications and possible patient morbidity. Objectives of diagnostic imaging are to facilitate diagnosis, decision making, and enhance treatment outcomes. As cases become more complex, advanced multiplane imaging methods allowing for a 3-D view are more likely to meet these objectives than traditional 2-D radiography. Until recently, advanced imaging options were somewhat limited to standard film tomography or medical CT, but development of cone beam volume tomography (CBVT) multiplane 3-D imaging systems specifically for dental use now provides an alternative imaging option. Two cases were utilized to compare the role of CBVT to these other imaging options and to illustrate how multiplane visualization can assist the pretreatment evaluation and decision-making process for complex impacted mandibular third molar cases.

  5. Death and Its Rituals in Novels on AIDS.

    ERIC Educational Resources Information Center

    Levy, Joseph J.; Nouss, Alexis

    1993-01-01

    Reviews novels dealing with the Acquired Immune Deficiency Syndrome (AIDS) epidemic, noting that their perspectives on death can be extracted through content analysis. Concludes that, overall, these novels present weak symbolization about death with rituals that are not highly elaborated and that complex images of the afterlife are not offered.…

  6. Quantitative phase and texture angularity analysis of brain white matter lesions in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Baxandall, Shalese; Sharma, Shrushrita; Zhai, Peng; Pridham, Glen; Zhang, Yunyan

    2018-03-01

    Structural changes to nerve fiber tracts are extremely common in neurological diseases such as multiple sclerosis (MS). Accurate quantification is vital. However, while nerve fiber damage is often seen as multi-focal lesions in magnetic resonance imaging (MRI), measurement through visual perception is limited. Our goal was to characterize the texture pattern of the lesions in MRI and determine how texture orientation metrics relate to lesion structure using two new methods: phase congruency and multi-resolution spatial-frequency analysis. The former aims to optimize the detection of the "edges and corners" of a structure, and the latter evaluates both the radial and angular distributions of image texture associated with the various forming scales of a structure. The radial texture spectra were previously confirmed to measure the severity of nerve fiber damage, and were thus included for validation. All measures were also done in the control brain white matter for comparison. Using clinical images of MS patients, we found that both phase congruency and weighted mean phase detected invisible lesion patterns and were significantly greater in lesions, suggesting higher structure complexity, than in the control tissue. Similarly, multi-angular spatial-frequency analysis detected much higher texture across the whole frequency spectrum in lesions than in the control areas. Such angular complexity was consistent with findings from the radial texture. Analysis of the phase and texture alignment may prove to be a useful new approach for assessing invisible changes in lesions using clinical MRI and thereby lead to improved management of patients with MS and similar disorders.
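
    The angular part of the spatial-frequency analysis can be illustrated with a short NumPy sketch that bins the 2D power spectrum of a lesion region by orientation; this is a generic single-scale illustration, not the multi-resolution implementation used in the study, and the bin count is arbitrary.

      # Angular texture spectrum: the 2D power spectrum of a region of interest
      # is binned by orientation; the spread of the profile reflects how
      # directional (or isotropic) the texture is.
      import numpy as np

      def angular_spectrum(roi, n_bins=36):
          f = np.fft.fftshift(np.fft.fft2(roi - roi.mean()))
          power = np.abs(f) ** 2
          h, w = roi.shape
          y, x = np.indices((h, w))
          angle = np.arctan2(y - h // 2, x - w // 2) % np.pi   # orientations in [0, pi)
          bins = (angle / np.pi * n_bins).astype(int) % n_bins
          spectrum = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
          return spectrum / spectrum.sum()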

  7. Satellite on-board real-time SAR processor prototype

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real-time. Originally the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel yielding real-time performances, i.e. without resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.

  8. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    PubMed Central

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operation. Different from traditional DNA encryption methods, our algorithm does not use complex biological operations but just uses the idea of DNA subsequence operations (such as the elongation operation, truncation operation, deletion operation, etc.) combined with the logistic chaotic map to scramble the location and the value of pixel points in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret key space and strong sensitivity to the secret key, and is able to resist exhaustive attack and statistical attack. PMID:23093912
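
    The logistic-chaotic-map part of such a scheme, scrambling pixel locations from a secret key (x0, r), can be sketched as below; the DNA subsequence operations themselves are not reproduced here, and the key values shown are placeholders.

      # Logistic-map-driven permutation of pixel locations (and its inverse).
      import numpy as np

      def logistic_sequence(x0, r, n):
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              xs[i] = x
          return xs

      def scramble(image, x0=0.3456, r=3.99):
          flat = image.ravel()
          order = np.argsort(logistic_sequence(x0, r, flat.size))  # chaotic permutation
          return flat[order].reshape(image.shape), order

      def unscramble(scrambled, order):
          flat = np.empty_like(scrambled.ravel())
          flat[order] = scrambled.ravel()
          return flat.reshape(scrambled.shape)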

  9. 3D time-lapse analysis of Rab11/FIP5 complex: spatiotemporal dynamics during apical lumen formation.

    PubMed

    Mangan, Anthony; Prekeris, Rytis

    2015-01-01

    Fluorescent imaging of fixed cells grown in two-dimensional (2D) cultures is one of the most widely used techniques for observing protein localization and distribution within cells. Although this technique can also be applied to polarized epithelial cells that form three-dimensional (3D) cysts when grown in a Matrigel matrix suspension, there are still significant limitations in imaging cells fixed at a particular point in time. Here, we describe the use of 3D time-lapse imaging of live cells to observe the dynamics of apical membrane initiation site (AMIS) formation and lumen expansion in polarized epithelial cells.

  10. Comparative studies of mononuclear Ni(II) and UO2(II) complexes having bifunctional coordinated groups: Synthesis, thermal analysis, X-ray diffraction, surface morphology studies and biological evaluation

    NASA Astrophysics Data System (ADS)

    Fahem, Abeer A.

    2012-03-01

    Two Schiff base ligands derived from condensation of phthalaldehyde and o-phenylenediamine in 1:2 (L1) and 2:1 (L2) ratios, having bifunctional coordinated groups (NH2 and CHO groups, respectively), and their metal complexes with Ni(II) and UO2(II) have been synthesized and characterized by elemental analysis, molar conductance, magnetic susceptibilities and spectral data (IR, 1H NMR, mass and solid reflectance) as well as thermal, XRPD and SEM analysis. The formulae [Ni(L1)Cl2]·2.5H2O, [UO2(L1)(NO3)2]·2H2O, [Ni(L2)Cl2]·1.5H2O and [UO2(L2)(NO3)2] have been suggested for the complexes. The vibrational spectral data show that the ligands behave as neutral ligands and coordinate to the metal ions in a tetradentate manner. The Ni(II) complexes are six coordinate with octahedral geometry and the ligand field parameters Dq, B, β and LFSE were calculated, while the UO2(II) complexes are eight coordinate with dodecahedral geometry and the force constant F(U-O) and bond length R(U-O) were calculated. The thermal decomposition of the complexes ended with metal chloride/nitrate as a final product and the highest thermal stability is displayed by the [UO2(L2)(NO3)2] complex. The X-ray powder diffraction data revealed the formation of nano-sized crystalline complexes. The SEM analysis provides the morphology of the synthesized compounds, and the SEM image of the [UO2(L2)(NO3)2] complex exhibits a nanorod structure. The growth-inhibiting potential of the ligands and their complexes has been assessed against a variety of bacterial and fungal strains.

  11. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    PubMed

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  12. The Effect of Multispectral Image Fusion Enhancement on Human Efficiency

    DTIC Science & Technology

    2017-03-20

    human visual system by applying a technique commonly used in visual perception research: ideal observer analysis. Using this approach, we establish... applications, analytic techniques, and procedural methods used across studies. This paper uses ideal observer analysis to establish a framework that allows... augmented similarly to incorporate research involving more complex stimulus content. Additionally, the ideal observer can be adapted for a number of

  13. Detection of M. tuberculosis using DNA chips combined with an image analysis system.

    PubMed

    Huang, T-S; Liu, Y-C; Bair, C-H; Sy, C-L; Chen, Y-S; Tu, H-Z; Chen, B-C

    2008-01-01

    To develop a packaged DNA chip assay (the DR. MTBC Screen assay) for direct detection of the Mycobacterium tuberculosis complex. We described a DNA chip assay based on the IS6110 gene that can be used for the detection of M. tuberculosis complex. Probes were spotted onto the polystyrene strips in the wells of 96-well microtitre plates and used for hybridisation with biotin-labelled amplicon to yield a pattern of visualised positive spots. The plate image was scanned, analysed and interpreted automatically. The results corresponded well with those obtained by conventional culture as well as clinical diagnosis, with sensitivity and specificity rates of respectively 83.8% and 94.2%, and 84.6% and 96.3%. We conclude that the DR. MTBC Screen assay can detect M. tuberculosis complex rapidly in respiratory specimens, readily adapts to routine work and provides a flexible choice to meet different cost-effectiveness and automation needs in TB-endemic countries. The cost for reagents is around US$10 per sample.

  14. Increasing lanthanide luminescence by use of the RETEL effect.

    PubMed

    Leif, Robert C; Vallarino, Lidia M; Becker, Margie C; Yang, Sean

    2006-08-01

    Luminescent lanthanide complexes produce emissions with the narrowest-known width at half maximum; however, their significant use in cytometry required an increase in luminescence intensity. The companion review, Leif et al., Cytometry 2006;69A:767-778, described a new technique for the enhancement of lanthanide luminescence, the Resonance Energy Transfer Enhanced Luminescence (RETEL) effect, which increases luminescence and is compatible with standard slide microscopy. The luminescence of the europium ion macrocyclic complex, EuMac, was increased by employing the RETEL effect. After adding the nonluminescent gadolinium ion complex of the thenoyltrifluoroacetonate (TTFA) ligand or the sodium salt of TTFA in ethanol solution, the EuMac-labeled sample was allowed to dry. Both a conventional arc lamp and a time-gated UV LED served as light sources for microscopic imaging. The emission intensity was measured with a CCD camera. Multiple time-gated images were summed with special software to permit analysis and effective presentation of the final image. With the RETEL effect, the luminescence of the EuMac-streptavidin conjugate increased at least six-fold upon drying. Nuclei of apoptotic cells were stained with DAPI and tailed with 5BrdUrd to which a EuMac-anti-5BrdU conjugate was subsequently attached. Time-gated images showed the long-lived EuMac luminescence but did not show the short-lived DAPI fluorescence. Imaging of DNA-synthesizing cells with an arc lamp showed that both S phase and apoptotic cells were labeled, and that their labeling patterns were different. The images of the luminescent EuMac and fluorescent DAPI were combined to produce a color image on a white background. This combination of simple chemistry, instrumentation, and presentation should make possible the inexpensive use of the lanthanide macrocycles, Quantum Dyes, as molecular diagnostics for cytological and histopathological microscopic imaging. (c) 2006 International Society for Analytical Cytology.

  15. Fine-grained recognition of plants from images.

    PubMed

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks, and the methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem, when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs makes them suitable for plant recognition "in the wild" where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.

  16. Anterior Chamber Angle Shape Analysis and Classification of Glaucoma in SS-OCT Images.

    PubMed

    Ni Ni, Soe; Tian, J; Marziliano, Pina; Wong, Hong-Tym

    2014-01-01

    Optical coherence tomography is a high resolution, rapid, and noninvasive diagnostic tool for angle closure glaucoma. In this paper, we present a new strategy for the classification of the angle closure glaucoma using morphological shape analysis of the iridocorneal angle. The angle structure configuration is quantified by the following six features: (1) mean of the continuous measurement of the angle opening distance; (2) area of the trapezoidal profile of the iridocorneal angle centered at Schwalbe's line; (3) mean of the iris curvature from the extracted iris image; (4) complex shape descriptor, fractal dimension, to quantify the complexity, or changes of iridocorneal angle; (5) ellipticity moment shape descriptor; and (6) triangularity moment shape descriptor. Then, the fuzzy k nearest neighbor (fkNN) classifier is utilized for classification of angle closure glaucoma. Two hundred and sixty-four swept source optical coherence tomography (SS-OCT) images from 148 patients were analyzed in this study. From the experimental results, the fkNN reveals the best classification accuracy (99.11 ± 0.76%) and AUC (0.98 ± 0.012) with the combination of fractal dimension and biometric parameters. It showed that the proposed approach has promising potential to become a computer aided diagnostic tool for angle closure glaucoma (ACG) disease.
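
    Feature (4), the fractal dimension, is commonly estimated by box counting; the sketch below shows a generic box-counting estimate on a binary contour image, as a stand-in illustration rather than the exact descriptor implementation of the study, with arbitrary box sizes.

      # Box-counting estimate of fractal dimension for a binary contour image
      # (True on the iridocorneal boundary).
      import numpy as np

      def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
          sizes = [s for s in sizes if s <= min(binary.shape)]
          counts = []
          for s in sizes:
              h = binary.shape[0] - binary.shape[0] % s
              w = binary.shape[1] - binary.shape[1] % s
              blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          # Slope of log N(s) vs log(1/s) gives the dimension estimate.
          coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return coeffs[0]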

  17. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex backgrounds, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex backgrounds is proposed based on the Otsu method via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes (target region, middle background and lower background) by maximizing the sum of their between-class variances. Then, the unsupervised background constraint is implemented based on the within-class variance of the target region, and hence the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex backgrounds demonstrate that the proposed method enjoys better segmentation performance and can even be comparable with manual segmentation. In addition, its average running time is only 9.22 ms, indicating good real-time performance.
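
    A simplified sketch of this two-stage idea, using scikit-image, is shown below; the paper's variance-based unsupervised background constraint is reduced here to clipping the two background classes, so this only illustrates the overall flow, not the proposed method itself.

      # Two-stage thresholding sketch: split the image into three classes with
      # multi-Otsu, flatten the two background classes, then re-apply Otsu.
      import numpy as np
      from skimage.filters import threshold_multiotsu, threshold_otsu

      def constrained_otsu_segmentation(gray):
          thresholds = threshold_multiotsu(gray, classes=3)   # two thresholds
          t_high = thresholds[1]
          simplified = np.where(gray > t_high, gray, t_high)   # suppress backgrounds
          t_final = threshold_otsu(simplified)
          return simplified > t_final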

  18. Magnetic resonance imaging features of complex Chiari malformation variant of Chiari 1 malformation.

    PubMed

    Moore, Hannah E; Moore, Kevin R

    2014-11-01

    Complex Chiari malformation is a subgroup of Chiari 1 malformation with distinct imaging features. Children with complex Chiari malformation are reported to have a more severe clinical phenotype and sometimes require more extensive surgical treatment than those with uncomplicated Chiari 1 malformation. We describe reported MR imaging features of complex Chiari malformation and evaluate the utility of craniometric parameters and qualitative anatomical observations for distinguishing complex Chiari malformation from uncomplicated Chiari 1 malformation. We conducted a retrospective search of the institutional imaging database using the keywords "Chiari" and "Chiari 1" to identify children imaged during the 2006-2011 time period. Children with Chiari 2 malformation were excluded after imaging review. We used the first available diagnostic brain or cervical spine MR study for data measurement. Standard measurements and observations were made of obex level (mm), cerebellar tonsillar descent (mm), perpendicular distance to basion-C2 line (pB-C2, mm), craniocervical angle (degrees), clivus length, and presence or absence of syringohydromyelia, basilar invagination and congenital craniovertebral junction osseous anomalies. After imaging review, we accessed the institutional health care clinical database to determine whether each subject clinically met criteria for Chiari 1 malformation or complex Chiari malformation. Obex level and craniocervical angle measurements showed statistically significant differences between the populations with complex Chiari malformation and uncomplicated Chiari 1 malformation. Cerebellar tonsillar descent and perpendicular distance to basion-C2 line measurements trended toward but did not meet statistical significance. Odontoid retroflexion, craniovertebral junction osseous anomalies, and syringohydromyelia were all observed proportionally more often in children with complex Chiari malformation than in those with Chiari 1 malformation. Characteristic imaging features of complex Chiari malformation, especially obex level, permit its distinction from the more common uncomplicated Chiari 1 malformation.

  19. A Hitchhiker's Guide to Functional Magnetic Resonance Imaging

    PubMed Central

    Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno

    2016-01-01

    Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers as they are capable of providing unique insights into brain functions. However, multiple technical considerations (ranging from specifics of paradigm design to imaging artifacts, complex protocol definition, and multitude of processing and methods of analysis, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques, and tools involved in the process. We have developed this guide both to help those new to the technique to overcome the most critical difficulties in its use, as well as to serve as a resource for the neuroimaging community. PMID:27891073

  20. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    NASA Astrophysics Data System (ADS)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in plenty of research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward that requires no additional measurements or a priori knowledge, and it removes the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
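
    The iterative core that such amplitude-phase retrieval algorithms build on alternates between measurement planes while enforcing the measured amplitudes; the sketch below shows the classic Gerchberg-Saxton form of this loop with an FFT as the propagator, not the reference-aided update formula proposed in the paper.

      # Classic Gerchberg-Saxton iteration between two measurement planes.
      import numpy as np

      def gerchberg_saxton(amp_object, amp_far, n_iter=200):
          field = amp_object * np.exp(1j * 2 * np.pi * np.random.rand(*amp_object.shape))
          for _ in range(n_iter):
              far = np.fft.fft2(field)
              far = amp_far * np.exp(1j * np.angle(far))          # enforce far-field amplitude
              field = np.fft.ifft2(far)
              field = amp_object * np.exp(1j * np.angle(field))   # enforce object amplitude
          return np.angle(field)                                   # recovered phase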

  1. Adaptive nonlinear L2 and L3 filters for speckled image processing

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.

    1997-04-01

    Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetrical (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level in homogeneous image regions. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
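
    A generic scanning-window order-statistic filter of this family can be sketched with SciPy as below; the fixed percentiles, mixing weight and window size are illustrative and do not reproduce the adaptive rules of the proposed L2 and L3 filters.

      # Scanning-window filter combining two order statistics of each window,
      # which is robust to one-sided speckle and occasional impulses.
      import numpy as np
      from scipy.ndimage import generic_filter

      def two_order_statistic_filter(image, size=5, alpha=0.6):
          def combine(window):
              q25, q50 = np.percentile(window, [25, 50])
              return alpha * q50 + (1.0 - alpha) * q25
          return generic_filter(image.astype(float), combine, size=size)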

  2. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    NASA Astrophysics Data System (ADS)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

    The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach, so improvements in bonding quality are important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. The traditional testing methods mainly include infrared testing, tensile testing, and strength testing, but using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result, can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments on three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
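
    The denoising stage of such a wavelet pipeline can be sketched in Python with PyWavelets rather than the MATLAB environment used in the paper; the wavelet choice, decomposition level and threshold rule below are illustrative defaults, and the enhancement and bonding-rate steps are not reproduced.

      # Wavelet denoising sketch: decompose, soft-threshold the detail
      # coefficients, reconstruct.
      import numpy as np
      import pywt

      def wavelet_denoise(image, wavelet="db4", level=3):
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          # Universal threshold estimated from the finest diagonal detail band.
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          t = sigma * np.sqrt(2.0 * np.log(image.size))
          denoised = [coeffs[0]] + [
              tuple(pywt.threshold(c, t, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          out = pywt.waverec2(denoised, wavelet)
          return out[:image.shape[0], :image.shape[1]]   # crop any padding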

  3. Imaging Practice Patterns: Referral Network Analysis of a Single State of Origination.

    PubMed

    Grayson, James; Basciano, Peter; Rawson, James V; Klein, Kandace

    2015-12-01

    The aim of this study was to examine the referral pattern of imaging studies requested in a single state compared with the potential location of interpretation. Analysis of Medicare patients in a DocGraph data set was performed to identify sequential different physician services claims for the same patient for which the second claim was for services provided by a radiologist. In the 2011 Medicare population, radiology referrals from physicians practicing in Georgia resulted in 76.5% of radiology interpretations by radiologists inside the state of Georgia. The states bordering Georgia accounted for 11.6% of interpretations in the Georgia market. The remaining interpretations were distributed throughout the remainder of the country. A significant proportion of routine imaging interpretation occurs outside the state in which an examination is performed. Additional studies are needed to identify complex drivers of imaging referral patterns, such as patient geographic location and demographics, radiologist workforce distribution, contractual obligations, and social relationships. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  4. A novel structure-aware sparse learning algorithm for brain imaging genetics.

    PubMed

    Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2014-01-01

    Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent from each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.
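
    The soft-threshold strategy that most existing SCCA algorithms rely on can be written as a simple alternating update; the NumPy sketch below shows that generic lasso-style baseline, not the structure-aware S2CCA model proposed in the paper, and the penalty values are placeholders.

      # Generic soft-threshold SCCA baseline: alternate sparse updates of the
      # canonical weights u (for SNPs X) and v (for imaging QTs Y).
      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def scca(X, Y, lam_u=0.1, lam_v=0.1, n_iter=100):
          n, p = X.shape
          q = Y.shape[1]
          u = np.ones(p) / np.sqrt(p)
          v = np.ones(q) / np.sqrt(q)
          C = X.T @ Y / n                        # cross-covariance
          for _ in range(n_iter):
              u = soft(C @ v, lam_u)
              u /= np.linalg.norm(u) + 1e-12
              v = soft(C.T @ u, lam_v)
              v /= np.linalg.norm(v) + 1e-12
          return u, v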

  5. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    PubMed

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
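
    The core of water reference deconvolution is a time-domain division of the metabolite FID by the measured water FID, multiplied by an idealized water decay; the sketch below illustrates that single step with NumPy under assumed dwell-time and decay-constant values, and omits the automated pipeline and parametric spectral fitting described in the paper.

      # Water reference deconvolution sketch: shared lineshape and frequency
      # distortions cancel when the metabolite FID is referenced to the water FID.
      import numpy as np

      def water_reference_deconvolve(metab_fid, water_fid, dwell=1e-3, t2_ideal=0.1):
          t = np.arange(metab_fid.size) * dwell
          ideal_water = np.abs(water_fid[0]) * np.exp(-t / t2_ideal)   # synthetic reference
          eps = 1e-12 * np.max(np.abs(water_fid))
          corrected_fid = metab_fid * ideal_water / (water_fid + eps)
          return np.fft.fftshift(np.fft.fft(corrected_fid))            # corrected spectrum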

  6. A combined method for correlative 3D imaging of biological samples from macro to nano scale

    NASA Astrophysics Data System (ADS)

    Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Antonopoulos, Georgios C.; Knudsen, Lars; Wrede, Christoph; Izykowski, Nicole; Grothausmann, Roman; Jonigk, Danny; Ochs, Matthias; Ripken, Tammo; Kühnel, Mark P.; Meyer, Heiko

    2016-10-01

    Correlative analysis requires examination of a specimen from macro to nano scale as well as applicability of analytical methods ranging from morphological to molecular. Accomplishing this with one and the same sample is laborious at best, due to deformation and biodegradation during measurements or intermediary preparation steps. Furthermore, data alignment using differing imaging techniques turns out to be a complex task, which considerably complicates the interconnection of results. We present correlative imaging of the accessory rat lung lobe by combining a modified Scanning Laser Optical Tomography (SLOT) setup with a specially developed sample preparation method (CRISTAL). CRISTAL is a resin-based embedding method that optically clears the specimen while allowing sectioning and preventing degradation. We applied and correlated SLOT with Multi Photon Microscopy, histological and immunofluorescence analysis as well as Transmission Electron Microscopy, all in the same sample. Thus, combining CRISTAL with SLOT enables the correlative utilization of a vast variety of imaging techniques.

  7. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to CG model with more general topology and the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  8. Bispectral infrared forest fire detection and analysis using classification techniques

    NASA Astrophysics Data System (ADS)

    Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando

    2004-01-01

    Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties, due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bi-spectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimation of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.
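
    A minimal sketch of the scatterplot-classification idea, assuming co-registered MIR and TIR brightness-temperature images; the class boundaries below are illustrative thresholds, not those derived in the paper.

        import numpy as np

        def classify_fire_pixels(t_mir, t_tir):
            """Toy MIR-TIR scatterplot classification into background / ashes / flames / embers.

            t_mir, t_tir: co-registered brightness-temperature images in kelvin.
            All threshold values are purely illustrative assumptions.
            """
            classes = np.zeros(t_mir.shape, dtype=np.uint8)        # 0 = background
            hot_mir = t_mir > 500
            classes[(t_tir > 320) & ~hot_mir] = 1                  # 1 = warm ashes
            classes[hot_mir & (t_tir <= 600)] = 2                  # 2 = flames (MIR-bright, cooler TIR)
            classes[hot_mir & (t_tir > 600)] = 3                   # 3 = glowing embers
            return classes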

  9. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  10. Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steven J.

    2004-01-01

    A set of imaging techniques based on a Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual-related safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during the NASA Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also proved its potential for enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capabilities were also used during the imaging analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA) was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.

  11. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using the fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case-specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns for benign and malignant cases differ significantly from those for normal cases. There is also substantial inter-observer variability, which cannot be explained by experience level alone.
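
    The box-counting estimate of fractal dimension mentioned above can be sketched as follows for a scanpath rasterized into a binary image; the set of box sizes is an illustrative choice and the image is assumed to be larger than the largest box.

        import numpy as np

        def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the fractal dimension of a binary scanpath image by box counting.

            For each box size s, count the boxes containing at least one 'on' pixel; the
            dimension is the slope of log(count) versus log(1/s).
            """
            counts = []
            for s in box_sizes:
                h = binary_img.shape[0] // s * s
                w = binary_img.shape[1] // s * s
                blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
            return slope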

  12. A multiple-point geostatistical approach to quantifying uncertainty for flow and transport simulation in geologically complex environments

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.

    2011-12-01

    In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
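
    Once multiple equally probable realizations have been simulated, the exceedance-probability indicator described above reduces to a simple per-cell frequency; a minimal sketch, with the array shapes assumed:

        import numpy as np

        def exceedance_probability(concentration_realizations, threshold):
            """Probability map of exceeding a contaminant-concentration threshold across realizations.

            concentration_realizations: array of shape (n_realizations, ny, nx) holding the simulated
            concentration field for each equally probable geologic realization.
            """
            return (np.asarray(concentration_realizations) > threshold).mean(axis=0)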

  13. On the Concept Image of Complex Numbers

    ERIC Educational Resources Information Center

    Nordlander, Maria Cortas; Nordlander, Edvard

    2012-01-01

    A study of how Swedish students understand the concept of complex numbers was performed. A questionnaire was issued reflecting the students' view of their own perception. The answers obtained show a variety of concept images describing how students adopt the concept of complex numbers. These concept images are classified into four categories in order to…

  14. On-road anomaly detection by multimodal sensor analysis and multimedia processing

    NASA Astrophysics Data System (ADS)

    Orhan, Fatih; Eren, P. E.

    2014-03-01

    The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the analysis of sensor data across multiple modalities. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
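
    A minimal sketch of the kind of sensor-threshold detection described above, assuming tri-axial smartphone accelerometer data; the thresholds and axis conventions are illustrative assumptions, not the framework's actual plugin logic.

        import numpy as np

        def detect_anomalies(accel, fs, brake_thresh=-3.0, bump_thresh=4.0):
            """Flag candidate hard-brake and pothole/speed-bump events from accelerometer data.

            accel: array of shape (n, 3) with (longitudinal, lateral, vertical) acceleration in m/s^2;
            fs: sampling rate in Hz. Threshold values are illustrative, not the paper's.
            """
            t = np.arange(len(accel)) / fs
            hard_brake = t[accel[:, 0] < brake_thresh]              # strong longitudinal deceleration
            vertical = accel[:, 2] - np.median(accel[:, 2])         # remove gravity offset
            bumps = t[np.abs(vertical) > bump_thresh]               # large vertical shocks
            return {"hard_brake_times": hard_brake, "bump_times": bumps}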

  15. Visualizing blood vessel trees in three dimensions: clinical applications

    NASA Astrophysics Data System (ADS)

    Bullitt, Elizabeth; Aylward, Stephen

    2005-04-01

    A connected network of blood vessels surrounds and permeates almost every organ of the human body. The ability to define detailed blood vessel trees enables a variety of clinical applications. This paper discusses four such applications and some of the visualization challenges inherent to each. Guidance of endovascular surgery: 3D vessel trees offer important information unavailable by traditional x-ray projection views. How best to combine the 2- and 3D image information is unknown. Planning/guidance of tumor surgery: During tumor resection it is critical to know which blood vessels can be interrupted safely and which cannot. Providing efficient, clear information to the surgeon together with measures of uncertainty in both segmentation and registration can be a complex problem. Vessel-based registration: Vessel-based registration allows pre-and intraoperative images to be registered rapidly. The approach both provides a potential solution to a difficult clinical dilemma and offers a variety of visualization opportunities. Diagnosis/staging of disease: Almost every disease affects blood vessel morphology. The statistical analysis of vessel shape may thus prove to be an important tool in the noninvasive analysis of disease. A plethora of information is available that must be presented meaningfully to the clinician. As medical image analysis methods increase in sophistication, an increasing amount of useful information of varying types will become available to the clinician. New methods must be developed to present a potentially bewildering amount of complex data to individuals who are often accustomed to viewing only tissue slices or flat projection views.

  16. Apparent Brecciation Gradient, Mount Desert Island, Maine

    NASA Astrophysics Data System (ADS)

    Hawkins, A. T.; Johnson, S. E.

    2004-05-01

    Mount Desert Island, Maine, comprises a shallow-level, Siluro-Devonian igneous complex surrounded by a distinctive breccia zone ("shatter zone" of Gilman and Chapman, 1988). The zone is very well exposed on the southern and eastern shores of the island and provides a unique opportunity to examine subvolcanic processes. The breccia of the Shatter Zone shows wide variation in the percentages of matrix and clasts, and may represent a spatial and temporal gradient in breccia formation due to a single eruptive or other catastrophic volcanic event. The shatter zone was divided into five developmental stages based on the extent of brecciation: Bar Harbor Formation, Sols Cliffs breccia, Seeley Road breccia, Dubois breccia, and Great Head breccia. A digital camera was employed to capture scale images of representative outcrops using a 0.5 m square Plexiglas frame. Individual images were joined in Adobe Photoshop to create a composite image of each outcrop. The composite photo was then exported to Adobe Illustrator, which was used to outline the clasts and produce a digital map of the outcrop for analysis. The fractal dimension (Fd) of each clast was calculated using NIH Image and a Euclidean distance mapping method described by Bérubé and Jébrak (1999) to quantify the morphology of the fragments, or the complexity of the outline. The more complex the fragment outline, the higher the fractal dimension, indicating that the fragment is less "mature" or has had less exposure to erosional processes, such as the injection of an igneous matrix. Sols Cliffs breccia has an average Fd of 1.125, whereas Great Head breccia has an average Fd of 1.040, with the stages between having intermediate values. The more complex clasts of the Sols Cliffs breccia, together with its small amount (26.38%) of matrix material, suggest that it represents the first stage in a sequence of brecciation ending at the more mature, matrix-supported (71.37%) breccia of Great Head. The results of this study will be used to guide isotopic and geochemical analysis of the matrix igneous material in an attempt to better understand the dynamic processes that occur in subvolcanic environments and the mechanisms involved in breccia formation.

  17. In-vivo and in-situ detection of atherosclerotic plaques using full-range complex-conjugate-free spectral domain optical coherence tomography in the murine carotid

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Wicks, Robert; Zhang, Kang; Zhao, Mingtao; Tyler, Betty M.; Hwang, Lee; Pradilla, Gustavo; Kang, Jin U.

    2013-03-01

    Carotid endarterectomy is a common vascular surgical procedure that may reduce patients' risk of stroke. A high-resolution, real-time imaging technique that can detect the position and size of vascular plaques would provide great value in reducing the risk level and improving the surgical outcome. Optical coherence tomography (OCT), as a high-resolution, high-speed, noninvasive imaging technique, was evaluated in this study. Twenty-four 24-week-old apolipoprotein E-deficient (ApoE-/-) mice were divided into three groups with 8 in each. One served as the control group fed a normal diet, one as the study group fed a high-fat diet to induce atherosclerosis, and the last as the treatment group fed a high-fat diet together with medication to treat atherosclerosis. Full-range, complex-conjugate-free spectral-domain OCT was used to image the mouse aorta near the neck area in-vivo, with the aorta surgically exposed to the imaging head. 2D and 3D images of the area of interest were presented in real time through a graphics-processing-unit-accelerated algorithm. In-situ imaging of all the mice after perfusion was performed again to validate the in-vivo detection results and to show the potential capability of OCT if combined with a surgical saline flush. Later, all the imaged arteries were stained with hematoxylin and eosin (H&E) for histological analysis. Preliminary results confirmed the accuracy and fast imaging speed of the OCT imaging technique in detecting atherosclerosis.

  18. Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data

    NASA Astrophysics Data System (ADS)

    Espinoza Molina, D.; Datcu, M.

    2015-04-01

    The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors requires new methodologies and tools that allow the end-user to access a large image repository, to extract and to infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g.: change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery of content from EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user, who receives the content of the data coded in an understandable format, associated with semantics, and ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex image archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components such as 1.) ingestion of EO images and related data providing basic features for image analysis, 2.) query engine based on metadata, semantics and image content, 3.) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content, 4.) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of terabytes of Earth Observation data.

  19. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractals Patterns

    PubMed Central

    Street, Nichola; Forsythe, Alexandra M.; Reilly, Ronan; Taylor, Richard; Helmy, Mai S.

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that the fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore if each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from the pair of images) and the second a mid-range FD model (likelihood of selecting an image within mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in fitting with current findings the mid-range models show greater consistency in preference not mediated by gender, age or culture. This article supports the established theory that the mid-range fractal patterns appear to be a universal construct underlying preference but also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This highlights a current stalemate in the field of empirical aesthetics. PMID:27252634

  20. Handling Different Spatial Resolutions in Image Fusion by Multivariate Curve Resolution-Alternating Least Squares for Incomplete Image Multisets.

    PubMed

    Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna

    2018-06-05

    Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
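
    For orientation, the core alternating-least-squares step underlying MCR-ALS can be sketched as below for a single complete data matrix; the incomplete-multiset structure, affine co-registration, and additional constraints used in the paper are omitted, and non-negativity is enforced here by simple clipping.

        import numpy as np

        def mcr_als(D, n_components, n_iter=50, seed=0):
            """Minimal MCR-ALS sketch: factor a data matrix D (pixels x channels) as C @ S.T
            under non-negativity, by alternating least-squares updates."""
            rng = np.random.default_rng(seed)
            S = rng.random((D.shape[1], n_components))                    # initial spectra guess
            for _ in range(n_iter):
                C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)     # distribution maps
                S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)   # spectral signatures
            return C, S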

  1. Integration of Infrared Thermography and Photogrammetric Surveying of Built Landscape

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Rosina, E.; L'Erario, A.; Dìaz-Vilariño, L.

    2017-05-01

    The thermal analysis of buildings represents a key step for the reduction of energy consumption, also in the case of Cultural Heritage. Here the complexity of the constructions and the adopted materials might require special analysis and tailored solutions. Infrared Thermography (IRT) is an important non-destructive investigation technique that may aid in the thermal analysis of buildings. The paper reports the application of IRT to a listed heritage building and to a residential one, as a demonstration that IRT is a suitable and convenient tool for analysing existing buildings. The purposes of the analysis are the assessment of the damages and energy efficiency of the building envelope. Since in many cases the complex geometry of historic constructions complicates the thermal analysis, the integration of IRT with accurate 3D models has been developed in recent years. Here the authors propose a solution based on up-to-date photogrammetric techniques for purely image-based 3D modelling, including automatic image orientation/sensor calibration using Structure-from-Motion and dense matching. Thus, an almost fully automatic pipeline for the generation of accurate 3D models showing the temperatures on a building skin in a realistic manner is described, where the only manual task is the measurement of a few common points for co-registration of the RGB and IR photogrammetric projects.

  2. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has become one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits for bilinear interpolation of NEQR images, covering both scaling up and scaling down, are given using the multi-controlled NOT operation, a special add-one operation, the reverse parallel adder, parallel subtractor, multiplier, and division operations. Finally, a complexity analysis of the quantum network circuit in terms of basic quantum gates is deduced. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbor interpolation.
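
    For reference, the classical bilinear interpolation that the quantum circuits realize can be written compactly as follows; this is a plain NumPy sketch, not a quantum simulation.

        import numpy as np

        def bilinear_resize(img, new_h, new_w):
            """Classical bilinear interpolation of a 2D grayscale image to (new_h, new_w)."""
            h, w = img.shape
            ys = np.linspace(0, h - 1, new_h)
            xs = np.linspace(0, w - 1, new_w)
            y0 = np.floor(ys).astype(int)
            x0 = np.floor(xs).astype(int)
            y1 = np.minimum(y0 + 1, h - 1)
            x1 = np.minimum(x0 + 1, w - 1)
            wy = (ys - y0)[:, None]                                  # fractional row offsets
            wx = (xs - x0)[None, :]                                  # fractional column offsets
            top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
            bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
            return top * (1 - wy) + bot * wy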

  3. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194

  4. A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.

    PubMed

    Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean

    2012-02-01

    Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors in qualitative and quantitative image analysis. Due to the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. This method includes a semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum by using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework shows good results for STXM images whose dimensions are powers of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window have been properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
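
    A minimal sketch of the described framework, assuming a grayscale image: spectral peaks are flagged where the Fourier amplitude greatly exceeds a local median, low frequencies are protected, and the flagged peaks are suppressed with Gaussian notches. All parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import median_filter

        def remove_moire(img, peak_factor=4.0, notch_sigma=3.0, keep_radius=20):
            """Median-Gaussian sketch: detect and notch out periodic (Moiré) spectral peaks."""
            F = np.fft.fftshift(np.fft.fft2(img))
            amp = np.abs(F)
            local_med = median_filter(amp, size=15)                       # local background level
            peaks = amp > peak_factor * local_med                         # candidate Moiré peaks
            yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
            cy, cx = img.shape[0] // 2, img.shape[1] // 2
            peaks &= (yy - cy) ** 2 + (xx - cx) ** 2 > keep_radius ** 2   # protect low frequencies
            notch = np.ones_like(amp)
            for py, px in zip(*np.nonzero(peaks)):                        # one Gaussian notch per peak
                notch *= 1 - np.exp(-(((yy - py) ** 2 + (xx - px) ** 2) / (2 * notch_sigma ** 2)))
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * notch)))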

  5. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, etc., have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  6. Flow-gated radial phase-contrast imaging in the presence of weak flow.

    PubMed

    Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen

    2013-01-01

    To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without use of an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing methods including downsampling, complex difference, and correlation analysis to improve the evaluation of flow-gating times in radial phase-contrast scans. Quantitatively comparable results (R = 0.92-0.96, n = 9) of flow-related parameters, including mean velocity, mean flow rate, and flow volume, with conventional ECG-gated imaging demonstrated that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries. The proposed flow-gating method can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.

  7. Learning from Monet: A Fundamentally New Approach to Image Analysis

    NASA Astrophysics Data System (ADS)

    Falco, Charles M.

    2009-03-01

    The hands and minds of artists are intimately involved in the creative process, intrinsically making paintings complex images to analyze. In spite of this difficulty, several years ago the painter David Hockney and I identified optical evidence within a number of paintings that demonstrated artists as early as Jan van Eyck (c1425) used optical projections as aids for producing portions of their images. In the course of making those discoveries, Hockney and I developed new insights that are now being applied in a fundamentally new approach to image analysis. Very recent results from this new approach include identifying from Impressionist paintings by Monet, Pissarro, Renoir and others the precise locations the artists stood when making a number of their paintings. The specific deviations we find when accurately comparing these examples with photographs taken from the same locations provide us with key insights into what the artists' visual skills informed them were the ways to represent these two-dimensional images of three-dimensional scenes to viewers. As will be discussed, these results also have implications for improving the representation of certain scientific data. Acknowledgment: I am grateful to David Hockney for the many invaluable insights into imaging gained from him in our collaboration.

  8. Superresolution intrinsic fluorescence imaging of chromatin utilizing native, unmodified nucleic acids for contrast

    PubMed Central

    Dong, Biqin; Almassalha, Luay M.; Stypula-Cyrus, Yolanda; Urban, Ben E.; Chandler, John E.; Nguyen, The-Quyen; Sun, Cheng; Zhang, Hao F.; Backman, Vadim

    2016-01-01

    Visualizing the nanoscale intracellular structures formed by nucleic acids, such as chromatin, in nonperturbed, structurally and dynamically complex cellular systems, will help expand our understanding of biological processes and open the next frontier for biological discovery. Traditional superresolution techniques to visualize subdiffractional macromolecular structures formed by nucleic acids require exogenous labels that may perturb cell function and change the very molecular processes they intend to study, especially at the extremely high label densities required for superresolution. However, despite tremendous interest and demonstrated need, label-free optical superresolution imaging of nucleotide topology under native nonperturbing conditions has never been possible. Here we investigate a photoswitching process of native nucleotides and present the demonstration of subdiffraction-resolution imaging of cellular structures using intrinsic contrast from unmodified DNA based on the principle of single-molecule photon localization microscopy (PLM). Using DNA-PLM, we achieved nanoscopic imaging of interphase nuclei and mitotic chromosomes, allowing a quantitative analysis of the DNA occupancy level and a subdiffractional analysis of the chromosomal organization. This study may pave a new way for label-free superresolution nanoscopic imaging of macromolecular structures with nucleotide topologies and could contribute to the development of new DNA-based contrast agents for superresolution imaging. PMID:27535934

  9. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert a sketch image into a synthesized face sketch image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. Specifically, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super-resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. Moreover, the synthesized face image after super-resolution not only better describes image details such as hair, nose, and mouth, but also effectively improves the recognition accuracy.
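
    As a toy illustration of the final recognition stage (an eigenface-style projection followed by LDA), the sketch below runs on synthetic data; the subject count, image size, and train/test split are hypothetical stand-ins, not the CUFS protocol.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in for synthesized face images: 40 subjects x 5 images of 32x32 pixels.
        rng = np.random.default_rng(0)
        y = np.repeat(np.arange(40), 5)
        X = rng.normal(size=(200, 32 * 32)) + y[:, None] * 0.05           # weak identity signal

        # Eigenface-style PCA projection followed by LDA classification.
        recognizer = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
        recognizer.fit(X[::2], y[::2])                                    # half for training
        print("recognition rate:", recognizer.score(X[1::2], y[1::2]))    # half for testing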

  10. Cortical complexity in bipolar disorder applying a spherical harmonics approach.

    PubMed

    Nenadic, Igor; Yotter, Rachel A; Dietzek, Maren; Langbein, Kerstin; Sauer, Heinrich; Gaser, Christian

    2017-05-30

    Recent studies using surface-based morphometry of structural magnetic resonance imaging data have suggested that some changes in bipolar disorder (BP) might be neurodevelopmental in origin. We applied a novel analysis of cortical complexity based on fractal dimensions in high-resolution structural MRI scans of 18 bipolar disorder patients and 26 healthy controls. Our region-of-interest based analysis revealed increases in fractal dimensions (in patients relative to controls) in left lateral orbitofrontal cortex and right precuneus, and decreases in right caudal middle frontal, entorhinal cortex, and right pars orbitalis, and left fusiform and posterior cingulate cortices. While our analysis is preliminary, it suggests that early neurodevelopmental pathologies might contribute to bipolar disorder, possibly through genetic mechanisms. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  11. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
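
    A crude stand-in for the indexing idea, using a KD-tree for fast nearest-neighbor queries and image-level voting; the descriptor dimensionality, database size, and voting rule are hypothetical and do not reproduce the paper's generative density estimator.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical database of feature descriptors pooled over many images.
        rng = np.random.default_rng(1)
        db_descriptors = rng.normal(size=(100_000, 64))
        db_image_id = rng.integers(0, 1_000, size=100_000)        # source image of each feature

        tree = cKDTree(db_descriptors)                             # fast NN queries over the database

        def knn_image_votes(query_descriptors, k=5):
            """Vote for database images via k-NN feature matches (toy stand-in for KNN density estimation)."""
            _, idx = tree.query(query_descriptors, k=k)
            return np.bincount(db_image_id[idx].ravel(), minlength=1_000)

        votes = knn_image_votes(rng.normal(size=(200, 64)))
        print("best-matching image:", votes.argmax())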

  12. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.

  13. Computer-aided-diagnosis (CAD) for colposcopy

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Ferris, Daron G.

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method, whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva and vagina), with special emphasis on the subjective appearance of metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested and shown proof of concept that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, having the potential to have a direct impact on improving women's health care and reducing associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description is presented of the envisioned ColpoCAD system, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features: blood vessel structures, acetowhite regions and lesion margins are shown. As this is a new and very complex field in medical image processing, the hope is that this paper can provide a framework and basis to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.

  14. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Rigid image registration is straightforward and has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found to be more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found to be advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed for correlating selected image features to response. These models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy in evaluation of tumor response. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  15. Applications of Fractal Analytical Techniques in the Estimation of Operational Scale

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.

    2000-01-01

    The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since longterm global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances. Stochastic fractal sets relax the self-similarity assumption and measure many scales and resolutions to represent the varying form of a phenomenon as the pixel size is increased in a convolution process. We have observed that for images of homogeneous land covers, the fractal dimension varies linearly with changes in resolution or pixel size over the range of past, current, and planned space-borne sensors. This relationship differs significantly in images of agricultural, urban, and forest land covers, with urban areas retaining the same level of complexity, forested areas growing smoother, and agricultural areas growing more complex as small pixels are aggregated into larger, mixed pixels. Images of scenes having a mixture of land covers have fractal dimensions that exhibit a non-linear, complex relationship to pixel size. Measuring the fractal dimension of a difference image derived from two images of the same area obtained on different dates showed that the fractal dimension increased steadily, then exhibited a sharp decrease at increasing levels of pixel aggregation. This breakpoint of the fractal dimension/resolution plot is related to the spatial domain or operational scale of the phenomenon exhibiting the predominant visible difference between the two images (in this case, mountain snow cover). The degree to which an image departs from a theoretical ideal fractal surface provides clues as to how much information is altered or lost in the processes of rescaling and rectification. 
The measured fractal dimension of complex, composite land covers such as urban areas also provides a useful textural index that can assist image classification of complex scenes.

  16. Molecular medicine: a path towards a personalized medicine.

    PubMed

    Miranda, Debora Marques de; Mamede, Marcelo; Souza, Bruno Rezende de; Almeida Barros, Alexandre Guimarães de; Magno, Luiz Alexandre; Alvim-Soares, Antônio; Rosa, Daniela Valadão; Castro, Célio José de; Malloy-Diniz, Leandro; Gomez, Marcus Vinícius; Marco, Luiz Armando De; Correa, Humberto; Romano-Silva, Marco Aurélio

    2012-03-01

    Psychiatric disorders are among the most common human illnesses; still, the molecular and cellular mechanisms underlying their complex pathophysiology remain to be fully elucidated. Over the past 10 years, our group has been investigating the molecular abnormalities in major signaling pathways involved in psychiatric disorders. Recent evidence obtained by the Instituto Nacional de Ciência e Tecnologia de Medicina Molecular (National Institute of Science and Technology - Molecular Medicine, INCT-MM) and others using behavioral analysis of animal models provided valuable insights into the underlying molecular alterations responsible for many complex neuropsychiatric disorders, suggesting that "defects" in critical intracellular signaling pathways have an important role in regulating neurodevelopment, as well as in pathophysiology and treatment efficacy. Resources from the INCT have allowed us to start doing research in the field of molecular imaging. Molecular imaging is a research discipline that visualizes, characterizes, and quantifies the biologic processes taking place at cellular and molecular levels in humans and other living systems through imaging within the reality of the physiological environment. In order to recognize targets, molecular imaging applies specific instruments (e.g., PET) that enable visualization and quantification in space and in real time of signals from molecular imaging agents. The objective of molecular medicine is to individualize treatment and improve patient care. Thus, molecular imaging is an additional tool to achieve our ultimate goal.

  17. A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.

    PubMed

    Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V

    2014-01-01

    The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility of dual-energy CT in disease management.
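
    A toy sketch of the underlying decomposition step for a three-material case: per-pixel volume fractions are obtained by solving a small linear system built from the two energy measurements plus a volume-conservation constraint. The calibration structure and values are assumptions, not the paper's MMD formulation.

        import numpy as np

        def three_material_decomposition(mu_low, mu_high, materials):
            """Solve per-pixel volume fractions of three basis materials from dual-energy CT values.

            materials: dict name -> (mu_low, mu_high) attenuation of the pure material at the two
            energies (calibration measurements in practice). A volume-conservation row forces the
            three fractions to sum to 1.
            """
            names = list(materials)                                    # exactly three basis materials
            A = np.array([[materials[n][0] for n in names],
                          [materials[n][1] for n in names],
                          [1.0, 1.0, 1.0]])
            b = np.stack([np.ravel(mu_low), np.ravel(mu_high),
                          np.ones(np.size(mu_low))])
            fractions = np.linalg.solve(A, b)                          # shape (3, n_pixels)
            return {n: fractions[i].reshape(np.shape(mu_low)) for i, n in enumerate(names)}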

  18. A quantitative method to measure biofilm removal efficiency from complex biomaterial surfaces using SEM and image analysis

    NASA Astrophysics Data System (ADS)

    Vyas, N.; Sammons, R. L.; Addison, O.; Dehghani, H.; Walmsley, A. D.

    2016-09-01

    Biofilm accumulation on biomaterial surfaces is a major health concern and significant research efforts are directed towards producing biofilm resistant surfaces and developing biofilm removal techniques. To accurately evaluate biofilm growth and disruption on surfaces, accurate methods which give quantitative information on biofilm area are needed, as current methods are indirect and inaccurate. We demonstrate the use of machine learning algorithms to segment biofilm from scanning electron microscopy images. A case study showing disruption of biofilm from rough dental implant surfaces using cavitation bubbles from an ultrasonic scaler is used to validate the imaging and analysis protocol developed. Streptococcus mutans biofilm was disrupted from sandblasted, acid etched (SLA) Ti discs and polished Ti discs. Significant biofilm removal occurred due to cavitation from ultrasonic scaling (p < 0.001). The mean sensitivity and specificity values for segmentation of the SLA surface images were 0.80 ± 0.18 and 0.62 ± 0.20 respectively and 0.74 ± 0.13 and 0.86 ± 0.09 respectively for polished surfaces. Cavitation has potential to be used as a novel way to clean dental implants. This imaging and analysis method will be of value to other researchers and manufacturers wishing to study biofilm growth and removal.
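
    The reported sensitivity and specificity of the segmentation can be computed from a predicted mask and a ground-truth mask as in this short sketch:

        import numpy as np

        def sensitivity_specificity(predicted, ground_truth):
            """Per-image sensitivity and specificity of a binary biofilm segmentation mask."""
            predicted = predicted.astype(bool)
            ground_truth = ground_truth.astype(bool)
            tp = np.count_nonzero(predicted & ground_truth)
            tn = np.count_nonzero(~predicted & ~ground_truth)
            fp = np.count_nonzero(predicted & ~ground_truth)
            fn = np.count_nonzero(~predicted & ground_truth)
            return tp / (tp + fn), tn / (tn + fp)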

  19. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to estimate the intensity feature using a difference-of-Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that accounts for the different contributions of each feature map when calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
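
    A minimal sketch of the difference-of-Gaussians step on the lightness channel, one of the three feature maps combined into the final saliency map; the sigma values are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_intensity_saliency(L_channel, sigma_center=1.0, sigma_surround=8.0):
            """Center-surround (difference-of-Gaussians) intensity feature on the L* channel."""
            center = gaussian_filter(L_channel.astype(float), sigma_center)
            surround = gaussian_filter(L_channel.astype(float), sigma_surround)
            saliency = np.abs(center - surround)
            return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)   # normalize to [0, 1]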

  20. New Trends of Emerging Technologies in Digital Pathology.

    PubMed

    Bueno, Gloria; Fernández-Carrobles, M Milagro; Deniz, Oscar; García-Rojo, Marcial

    2016-01-01

    The future paradigm of pathology will be digital. Instead of conventional microscopy, a pathologist will perform a diagnosis through interacting with images on computer screens and performing quantitative analysis. The fourth generation of virtual slide telepathology systems, so-called virtual microscopy and whole-slide imaging (WSI), has allowed for the storage and fast dissemination of image data in pathology and other biomedical areas. These novel digital imaging modalities encompass high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides. Moreover, automated image analysis with WSI can extract specific diagnostic features of diseases and quantify individual components of these features to support diagnoses and provide informative clinical measures of disease. Therefore, the challenge is to apply information technology and image analysis methods to exploit the new and emerging digital pathology technologies effectively in order to process and model all the data and information contained in WSI. The final objective is to support the complex workflow from specimen receipt to anatomic pathology report transmission, that is, to improve diagnosis both in terms of pathologists' efficiency and with new information. This article reviews the main concerns about and novel methods of digital pathology discussed at the latest workshop in the field carried out within the European project AIDPATH (Academia and Industry Collaboration for Digital Pathology). © 2016 S. Karger AG, Basel.

  1. Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.

    2001-01-01

    During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so efficient analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segment binary or grayscale images with irregular and complex shapes, its application in the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, Section 5 presents the main conclusions.
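
    A loose sketch of how classic grayscale morphology can be extended to a hyperspectral cube by ordering the spectra in a local window with a cumulative spectral-angle distance, so that the "dilation" picks the most spectrally distinct (candidate pure) pixel and the "erosion" the most mixed one. The window size, distance measure and function names are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def extended_dilation_erosion(cube, win=3):
    """For each pixel, rank the spectra inside a win x win window by their
    cumulative spectral angle to the other spectra in the window; the
    'max' (most distinct) and 'min' (most mixed) spectra play the roles of
    dilation and erosion.  Brute force, so only suitable for small cubes."""
    rows, cols, bands = cube.shape
    half = win // 2
    dil = np.zeros_like(cube, dtype=float)
    ero = np.zeros_like(cube, dtype=float)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            window = cube[i-half:i+half+1, j-half:j+half+1].reshape(-1, bands)
            dist = np.array([sum(spectral_angle(p, q) for q in window)
                             for p in window])
            dil[i, j] = window[dist.argmax()]
            ero[i, j] = window[dist.argmin()]
    return dil, ero
```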

  2. High-resolution imaging using a wideband MIMO radar system with two distributed arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi

    2010-05-01

    Imaging a fast maneuvering target has been an active research area in recent decades. Usually, an array antenna with multiple elements is used to avoid the motion compensation involved in inverse synthetic aperture radar (ISAR) imaging. Nevertheless, this comes at a price: the hardware complexity is much higher than that of a single-antenna ISAR system, which relies instead on complex algorithms. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. The system model, the equivalent-array production method and the imaging procedure are presented. Compared with a classical real aperture radar (RAR) imaging system, an important contribution of our method is that the hardware complexity is reduced because many additional virtual array elements are obtained. Numerical simulations are provided to test the system and imaging method.
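
    The hardware saving comes from the virtual array formed by all transmit-receive element pairs. A small illustrative sketch (hypothetical element positions; the midpoint phase-centre convention is an assumption):

```python
import numpy as np

# Hypothetical 1-D element positions (metres) of two small distributed arrays.
tx = np.array([0.0, 0.5, 1.0])                # transmit array
rx = np.array([100.0, 100.5, 101.0, 101.5])   # receive array, far from tx

# For a MIMO radar, each transmit-receive pair contributes an equivalent
# (virtual) phase centre, so M tx x N rx real elements give up to M*N virtual
# elements - the source of the reduced hardware complexity.
virtual = ((tx[:, None] + rx[None, :]) / 2.0).ravel()   # midpoint convention
print(f"{tx.size * rx.size} virtual elements from {tx.size + rx.size} real ones")
```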

  3. Complex network analysis of resting-state fMRI of the brain.

    PubMed

    Anwar, Abdul Rauf; Hashmy, Muhammad Yousaf; Imran, Bilal; Riaz, Muhammad Hussnain; Mehdi, Sabtain Muhammad Muntazir; Muthalib, Makii; Perrey, Stephane; Deuschl, Gunther; Groppa, Sergiu; Muthuraman, Muthuraman

    2016-08-01

    Because brain activity hardly ever ceases in healthy individuals, analysis of the brain's resting-state function is pertinent. Various resting-state networks are active in the idle brain at any time. Neuroimaging studies have shown that structurally distant regions of the brain can be functionally connected, and brain regions that are functionally connected during rest constitute a resting-state network. In the present study, we employed complex network measures to estimate the presence of community structure within a network; this estimate is termed modularity. Instead of a traditional correlation matrix, we used a coherence matrix derived from a causality measure between nodes. Our results show that in prolonged resting state the modularity starts to decrease. This decrease was observed in all resting-state networks and in both hemispheres of the brain. Our study highlights the use of a coherence matrix instead of a correlation matrix for complex network analysis.
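
    A compact sketch of a modularity computation on a coherence matrix using NetworkX (the threshold and the greedy community detection are assumptions; the study's exact pipeline is not specified in the abstract):

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def network_modularity(coherence, threshold=0.3):
    """Build a weighted graph from a node-by-node coherence matrix and
    return the modularity of a greedy community partition."""
    c = np.array(coherence, dtype=float)
    np.fill_diagonal(c, 0.0)      # ignore self-coherence
    c[c < threshold] = 0.0        # keep only strong couplings
    G = nx.from_numpy_array(c)    # undirected graph, 'weight' = coherence
    parts = community.greedy_modularity_communities(G, weight="weight")
    return community.modularity(G, parts, weight="weight")
```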

  4. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described in terms of the scattering process through an inhomogeneous medium. Multiple scattering introduces an inherent nonlinearity between the scattering medium and the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption breaks down as the sample becomes more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated by using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of the BPM are similar to neural networks; we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
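
    For orientation, one split-step slice of a beam propagation (BPM) forward model might look like the following 1-D sketch (angular-spectrum diffraction followed by a thin phase screen; the parameters and background index are assumptions, not the authors' implementation):

```python
import numpy as np

def bpm_propagate(field, index, dz, dx, wavelength, n0=1.33):
    """One split-step BPM slice: diffract the complex field over dz in a
    homogeneous background n0, then apply the phase delay of the
    refractive-index perturbation of that slice (index is an array the
    same length as field)."""
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)
    kz = np.sqrt((n0 * k0) ** 2 - kx ** 2 + 0j)   # angular-spectrum propagator
    diffracted = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))
    return diffracted * np.exp(1j * k0 * (index - n0) * dz)
```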

  5. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, creating a need for bioimaging platforms that extract and analyze the text and content of biomedical images in order to build effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies, results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  6. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is complicated by the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is performed by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method with different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared with other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
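
    As a point of reference for the clustering step, a plain fuzzy c-means iteration (not the UOFC algorithm itself; the fuzzifier m and the initialization are assumptions) can be sketched as:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: X is (n_samples, n_features); returns the
    membership matrix U (n_samples, c) and the cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u ~ d^(-2/(m-1)), row-normalized
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centres
```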

  7. Adapting the ISO 20462 softcopy ruler method for online image quality studies

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Phillips, Jonathan B.; Williams, Don

    2013-01-01

    In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics do not perform equally well across the full range of distortions or across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first is based on grouping the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
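
    A small sketch of the first strategy, correlating objective and subjective scores within groups of similar spatial complexity (the array names and the quantile-based grouping are assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

def grouped_correlation(objective, subjective, complexity, n_groups=3):
    """Pearson correlation between a no-reference metric and subjective
    scores, computed separately within groups of similar spatial complexity."""
    objective, subjective, complexity = map(np.asarray,
                                            (objective, subjective, complexity))
    edges = np.quantile(complexity, np.linspace(0, 1, n_groups + 1))
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (complexity >= lo) & (complexity <= hi)
        if mask.sum() > 2:
            r, _ = pearsonr(objective[mask], subjective[mask])
            results.append(r)
    return results
```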

  8. Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery

    NASA Astrophysics Data System (ADS)

    Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn

    2015-04-01

    Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present an initial comparison of the polarimetric information content of the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based on the expectation-maximization (EM) algorithm for finite mixture modeling with the complex Wishart probability density function. Our results show that the QP and CP modes exhibit somewhat different characteristics. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.

  9. Multipathing Via Three Parameter Common Image Gathers (CIGs) From Reverse Time Migration

    NASA Astrophysics Data System (ADS)

    Ostadhassan, M.; Zhang, X.

    2015-12-01

    A noteworthy problem for seismic exploration is the effect of multipathing (both wanted and unwanted) caused by complex subsurface structures. We show that reverse time migration (RTM) combined with a unified, systematic three-parameter framework that flexibly handles multipathing can be accomplished by adding one more dimension (image time) to the angle-domain common image gather (ADCIG) data. RTM is widely used to generate prestack depth migration images. When using the cross-correlation image condition in 2D prestack RTM, the usual practice is to sum over all migration time steps. Thus all possible wave types and paths automatically contribute to the resulting image, including destructive wave interferences, phase shifts, and other distortions. One reason is that multipath (prismatic wave) contributions are not properly sorted and mapped in the ADCIGs. Also, multipath arrivals usually have different instantaneous attributes (amplitude, phase and frequency), and if not separated, the amplitudes and phases in the final prestack image will not stack coherently across sources. A prismatic path satisfies an image time for its unique path; Cavalca and Lailly (2005) show that RTM images with multipaths can provide more complete target information in complex geology, as multipaths usually have different incident angles and amplitudes compared to primary reflections. If the image time slices within a cross-correlation common-source migration are saved for each image time, this three-parameter (incident angle, depth, image time) volume can be post-processed to generate separate, or composite, images of any desired subset of the migrated data. Images can be displayed for primary contributions, any combination of primary and multipath contributions (with or without artifacts), or various projections, including the conventional ADCIG (angle vs. depth) plane. Examples show that signal from the true structure can be separated from artifacts caused by multiple arrivals when they have different image times. This improves the quality of images and benefits migration velocity analysis (MVA) and amplitude variation with angle (AVA) inversion.

  10. Sparse dictionary learning for resting-state fMRI analysis

    NASA Astrophysics Data System (ADS)

    Lee, Kangjoo; Han, Paul Kyu; Ye, Jong Chul

    2011-09-01

    Recently, there has been increased interest in using neuroimaging techniques to investigate what happens in the brain at rest. Functional imaging studies have revealed that default-mode network activity is disrupted in Alzheimer's disease (AD). However, there is as yet no consensus on the choice of analysis method when applying resting-state analysis to disease classification. This paper proposes a novel compressed-sensing-based resting-state fMRI analysis tool called Sparse-SPM. Since graph-theoretical analysis has shown that the brain's functional systems have features of complex networks, we apply a graph model to represent a sparse combination of information flows from a complex-network perspective. In particular, a new concept of a spatially adaptive design matrix is proposed, implemented through sparse dictionary learning. The proposed approach shows better performance than conventional methods, such as independent component analysis (ICA) and the seed-based approach, in classifying AD patients from normal controls using resting-state analysis.
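
    A rough sketch of sparse dictionary learning on resting-state time series using scikit-learn (hypothetical data shapes and hyperparameters; this is not the Sparse-SPM pipeline):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical resting-state data: one time series of length T per voxel.
T, n_voxels = 120, 500
Y = np.random.randn(T, n_voxels)

# Learn a small set of temporal atoms; each voxel's series is then a sparse
# combination of atoms, and the atom matrix plays the role of a spatially
# adaptive design matrix.
dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=50, random_state=0)
codes = dl.fit_transform(Y.T)   # (n_voxels, n_components) sparse coefficients
atoms = dl.components_          # (n_components, T) temporal dictionary
```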

  11. Analysis of interstellar fragmentation structure based on IRAS images

    NASA Technical Reports Server (NTRS)

    Scalo, John M.

    1989-01-01

    The goal of this project was to develop new tools for analyzing the structure of densely sampled maps of interstellar star-forming regions. Particular emphasis was placed on the recognition and characterization of nested hierarchical structure and fractal irregularity, and their relation to the level of star formation activity. The panoramic IRAS images provided data with the required range in spatial scale (greater than a factor of 100) and in column density (greater than a factor of 50). To obtain a densely sampled map of a cloud complex that is self-gravitating but not (yet?) much stirred up by star formation, a column density image of the Taurus region was constructed from IRAS data. The primary drawback of using IRAS data for this purpose is that they contain no velocity information, so the possible importance of projection effects must be kept in mind.

  12. Building a virtual simulation platform for quasistatic breast ultrasound elastography using open source software: A preliminary investigation.

    PubMed

    Wang, Yu; Helminen, Emily; Jiang, Jingfeng

    2015-09-01

    Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using an FE simulator (FEBio) was used to deform the original virtual breast phantom and obtain the post-deformation phantom for subsequent ultrasound simulations. Using these procedures, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Representative examples were used to demonstrate the capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. The resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example showed breast tumors with complex shapes and/or compositions; the resultant strain images showed complex patterns of the kind normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. The proposed virtual QUE platform was implemented and successfully tested in this study. Through showcase examples, this work demonstrates the capability of creating sophisticated QUE data in a way that cannot be achieved with physical tissue-mimicking phantoms or other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet the specific needs of different research groups to drive innovation in QUE.

  13. Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying

    2011-01-01

    Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were examined for detecting urbanization. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean-spectral images. PCA is not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706
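
    A minimal sketch of two of the change-detection ingredients mentioned, per-band image differencing and the leading principal component of the stacked bi-temporal image (hypothetical array shapes; not the paper's full workflow):

```python
import numpy as np

def difference_and_pca(img_t1, img_t2):
    """Simple bi-temporal change indicators from two (H, W, bands) arrays:
    per-band differencing and the first principal component of the stack."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    n_bands = img_t1.shape[2]
    stack = np.concatenate([img_t1, img_t2], axis=2).reshape(-1, 2 * n_bands)
    stack = stack - stack.mean(axis=0)
    _, _, vt = np.linalg.svd(stack, full_matrices=False)
    pc1 = (stack @ vt[0]).reshape(img_t1.shape[:2])  # change often loads here
    return diff, pc1
```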

  14. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis, a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
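
    A simple sketch of collapsing a face image into a vertical "barcode" of horizontal contrast bands (band count and smoothing are assumptions; this only illustrates the idea, not the stimuli used in the experiments):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def vertical_barcode(gray, n_bands=30):
    """Collapse a face image to a 1-D 'barcode': the contrast energy of
    horizontal bands sampled along the vertical image axis."""
    gray = gray.astype(float)
    # vertical gradient emphasizes horizontal structure (brows, eyes, mouth)
    vertical_gradient = np.diff(gaussian_filter1d(gray, 2, axis=0), axis=0)
    row_energy = np.sqrt((vertical_gradient ** 2).mean(axis=1))
    # pool rows into a fixed number of bands so faces of different sizes compare
    bands = np.array_split(row_energy, n_bands)
    return np.array([b.mean() for b in bands])
```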

  15. Analysis of line structure in handwritten documents using the Hough transform

    NASA Astrophysics Data System (ADS)

    Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin

    2010-01-01

    In the analysis of handwriting in documents, a central task is determining the line structure of the text, e.g., the number of text lines, the locations of their start and end points, and line width. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform to handwritten documents with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
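
    A minimal OpenCV sketch of estimating dominant line orientations in a handwritten page with the Hough transform (the binarization settings and vote threshold are assumptions):

```python
import cv2
import numpy as np

def detect_text_line_angles(image_path, threshold=200):
    """Estimate dominant line orientations (degrees from horizontal) in a
    handwritten page using the standard Hough transform."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 25, 15)
    lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 180, threshold=threshold)
    if lines is None:
        return []
    # theta is the angle of the line's normal; subtract 90 deg so that a
    # perfectly horizontal text line maps to 0 deg (its skew).
    return [np.degrees(theta) - 90 for _, theta in lines[:, 0]]
```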

  16. Quantitative characterization of brain β-amyloid using a joint PiB/FDG PET image histogram

    NASA Astrophysics Data System (ADS)

    Camp, Jon J.; Hanson, Dennis P.; Holmes, David R.; Kemp, Bradley J.; Senjem, Matthew L.; Murray, Melissa E.; Dickson, Dennis W.; Parisi, Joseph; Petersen, Ronald C.; Lowe, Val J.; Robb, Richard A.

    2014-03-01

    A complex analysis based on spatial registration of PiB and MRI patient images, used to localize the PiB signal to specific cortical brain regions, has proven effective in identifying imaging characteristics associated with underlying Alzheimer's disease (AD) and Lewy body disease (LBD) pathology. This paper presents an original method of image analysis and stratification of amyloid-related brain disease based on the global spatial correlation of PiB PET images with 18F-FDG PET images (without MR images) to categorize the PiB signal arising from the cortex. Rigid registration of PiB and 18F-FDG images is relatively straightforward, and, once the images are registered, the 18F-FDG signal serves to identify the cortical region in which the PiB signal is relevant. Cortical grey matter demonstrates the highest levels of amyloid accumulation and therefore the greatest PiB signal related to amyloid pathology. The highest-intensity voxels in the 18F-FDG image are attributed to cortical grey matter. Correlation of the highest-intensity PiB voxels with the highest 18F-FDG values indicates the presence of β-amyloid protein in the cortex in disease states, while correlation of the highest-intensity PiB voxels with mid-range 18F-FDG values indicates only nonspecific binding in the white matter.
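
    A small sketch of the joint PiB/FDG idea: after rigid co-registration, form a joint intensity histogram and count how many of the brightest PiB voxels fall in the high-FDG (cortical) intensity range (the percentile cut-offs are assumptions, not the paper's thresholds):

```python
import numpy as np

def cortical_pib_overlap(pib, fdg, pib_pct=90, fdg_pct=75, bins=64):
    """For rigidly co-registered PiB and FDG volumes, return the joint
    intensity histogram and the fraction of the brightest PiB voxels that
    coincide with high-FDG (cortical grey matter) voxels."""
    pib, fdg = pib.ravel(), fdg.ravel()
    joint, _, _ = np.histogram2d(pib, fdg, bins=bins)
    high_pib = pib >= np.percentile(pib, pib_pct)
    high_fdg = fdg >= np.percentile(fdg, fdg_pct)
    overlap = np.sum(high_pib & high_fdg) / max(np.sum(high_pib), 1)
    return joint, overlap
```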

  17. Noise-Coupled Image Rejection Architecture of Complex Bandpass ΔΣAD Modulator

    NASA Astrophysics Data System (ADS)

    San, Hao; Kobayashi, Haruo

    This paper proposes a new technique for realizing the image-rejection function of a complex bandpass ΔΣAD modulator with a noise-coupling architecture. The complex bandpass ΔΣAD modulator processes only the input I and Q signals, not image signals, so the AD conversion can be realized with low power dissipation. It produces an asymmetric noise-shaped spectrum, which is desirable for low-IF receiver applications. However, the performance of the complex bandpass ΔΣAD modulator suffers from mismatch between the internal analog I and Q paths. I/Q path mismatch causes an image signal, and the quantization noise of the mirror image band aliases into the desired signal band, which degrades the SQNDR (signal-to-quantization-noise-and-distortion ratio) of the modulator. In the proposed modulator architecture, an extra notch for image rejection is realized by a noise-coupled topology. Only some passive capacitors and switches are added to the modulator; the additional operational-amplifier integrator required in the conventional image-rejection realization is not necessary. Therefore, the performance of the complex modulator can be raised without additional power dissipation. We performed MATLAB simulations to confirm the validity of the proposed architecture. The simulation results show that the proposed architecture achieves effective image rejection and improves the SQNDR of the complex bandpass ΔΣAD modulator.

  18. Ideal AFROC and FROC observers.

    PubMed

    Khurd, Parmeshwar; Liu, Bin; Gindi, Gene

    2010-02-01

    Detection of multiple lesions in images is a medically important task, and free-response receiver operating characteristic (FROC) analysis and its variants, such as alternative FROC (AFROC) analysis, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may turn out to be valuable for imaging system optimization and for the design of computer-aided diagnosis techniques for lesion detection in medical images. In this paper, we derive ideal AFROC and FROC observers. They are ideal in that they maximize, amongst all decision strategies, the area, or any partial area, under the associated AFROC or FROC curve. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We also consider a Bayes risk analysis for the multiple-signal detection task with an appropriate definition of costs. A general decision strategy that minimizes Bayes risk is derived. With particular cost constraints, this general decision strategy reduces to the decision strategy associated with the ideal AFROC or FROC observer.

  19. NET: a new framework for the vectorization and examination of network data.

    PubMed

    Lasser, Jana; Katifori, Eleni

    2017-01-01

    The analysis of complex networks, both in general and as pertaining to real biological systems, has long been a focus of intense scientific attention. In this paper we introduce two tools that provide fast and efficient means for processing and quantifying biological networks such as Drosophila tracheoles or leaf venation patterns: the Network Extraction Tool (NET) to extract data and the Graph-edit-GUI (GeGUI) to visualize and modify networks. NET is especially designed for high-throughput, semi-automated analysis of biological datasets containing digital images of networks. The framework starts with segmentation of the image and then proceeds to vectorization using methodologies from optical character recognition. After a series of steps to clean and improve the quality of the extracted data, the framework produces a graph in which the network is represented only by its nodes and neighborhood relations. The final output contains the adjacency matrix of the graph, the widths of the edges and the positions of the nodes in space. NET also provides tools for statistical analysis of network properties, such as the number of nodes or the total network length. Other, more complex metrics can be calculated by importing the vectorized network into specialized network analysis packages. GeGUI is designed to facilitate manual correction of non-planar networks, as these may contain artifacts or spurious junctions due to branches crossing each other. It is tailored for, but not limited to, the processing of networks from microscopy images of Drosophila tracheoles. The networks extracted by NET closely approximate the network depicted in the original image. NET is fast, yields reproducible results and is able to capture the full geometry of the network, including curved branches. Additionally, GeGUI allows easy handling and visualization of the networks.
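
    Once a network has been vectorized, simple statistics like those NET reports can be computed from the graph, for example with NetworkX (the input format below, a dict mapping node pairs to edge lengths, is a hypothetical stand-in for NET's output):

```python
import networkx as nx

def network_summary(edge_lengths):
    """Basic statistics of a vectorized network: node count, edge count and
    total length, assuming edge_lengths maps (u, v) pairs to branch lengths."""
    G = nx.Graph()
    for (u, v), length in edge_lengths.items():
        G.add_edge(u, v, length=length)
    total_length = sum(d["length"] for _, _, d in G.edges(data=True))
    return {"nodes": G.number_of_nodes(),
            "edges": G.number_of_edges(),
            "total_length": total_length}

# Example with a tiny hypothetical network:
print(network_summary({(0, 1): 2.5, (1, 2): 1.0, (1, 3): 3.2}))
```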

  20. Analysis and classification of commercial ham slice images using directional fractal dimension features.

    PubMed

    Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen

    2009-02-01

    This paper presents a novel, non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF) dimensions computed at 0°, 45°, 90° and 135°, and a minimum distance classifier was adopted to perform the classification task. The roles of different colour spaces and of image resolution level in DF analysis were also investigated. The approach was applied to 480 wafer-thin ham slices from four types of ham (120 slices per type): pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitized intensity images in greyscale and in the R, G, B, L*, a*, b*, H, S, and V colour components at three image resolution levels (100%, 50%, and 25%). Results show that, in spite of the complexity and high variability in colour and texture appearance, modelling ham slice images with DF dimensions captures textural features that differentiate the four commercial ham types. Independent DF features give better discrimination than the average of the four directions. However, the DF dimensions are highly sensitive to colour channel, orientation and image resolution. The classification accuracy using six DF features (a*(90°), a*(135°), H(0°), H(45°), S(0°), H(90°)) was 93.9% for training data and 82.2% for testing data.
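
    A minimal sketch of the minimum distance (nearest-centroid) classifier applied to directional fractal feature vectors (the data layout and variable names are hypothetical):

```python
import numpy as np

def train_centroids(features, labels):
    """Mean feature vector per ham type: the 'training' of a minimum
    distance classifier.  features is (n_samples, n_features)."""
    labels = np.asarray(labels)
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(sample, centroids):
    """Assign a sample (e.g. six directional-fractal features) to the class
    whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))
```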
